Stop consuming messages for a Stream listener - Spring Boot

I am looking for a way to stop consuming messages with a stream listener.
@StreamListener(MBinding.M_INPUT)
public void consumeMessage(Message<MerchantEvent> message) {
    // handle the received message
}
cloud:
  stream:
    bindings:
      MInput:
        destination: topicName
        group: groupName
I have googled it, but so far I still have no idea how to stop consuming. Does anyone know how to do it?

You can do it using the actuator (see Binding Visualization and Control in the reference manual), or you can invoke the endpoint programmatically:
@SpringBootApplication
@EnableBinding(Sink.class)
public class So58795176Application {

    public static void main(String[] args) {
        SpringApplication.run(So58795176Application.class, args);
    }

    @StreamListener(Sink.INPUT)
    public void listen(String in) {
        System.out.println(in);
    }

    @Autowired
    BindingsEndpoint endpoint;

    @Bean
    public ApplicationRunner runner() {
        return args -> {
            System.in.read();
            endpoint.changeState("input", State.STOPPED);
            System.in.read();
            endpoint.changeState("input", State.STARTED);
        };
    }

}
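If you want to toggle consumption at runtime without hitting the actuator over HTTP yourself, the same BindingsEndpoint call can be put behind a small REST controller. A minimal sketch; the controller class and request paths are my own, and the import locations assume Spring Cloud Stream 2.x/3.x:

import org.springframework.cloud.stream.endpoint.BindingsEndpoint;
import org.springframework.cloud.stream.endpoint.BindingsEndpoint.State;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class BindingControlController {

    private final BindingsEndpoint endpoint;

    public BindingControlController(BindingsEndpoint endpoint) {
        this.endpoint = endpoint;
    }

    // POST /bindings/input/stop -> stop consuming on that binding
    @PostMapping("/bindings/{name}/stop")
    public void stop(@PathVariable String name) {
        endpoint.changeState(name, State.STOPPED); // same call as the runner above
    }

    // POST /bindings/input/start -> resume consuming
    @PostMapping("/bindings/{name}/start")
    public void start(@PathVariable String name) {
        endpoint.changeState(name, State.STARTED);
    }

}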

Related

Can't retrieve x-death header in Rabbit listener (Spring Boot)

I am using RabbitMQ and I want to access the message retry count, but I always get null when reading the x-death header, as shown in this example. The message itself is read correctly.
@Component
@RabbitListener(queues = "myQueue")
public class AdyenNotificationMessageListener {

    @RabbitHandler
    public void processMessage(byte[] messageByte,
            @Header(name = "x-death", required = false) Map<?, ?> death) {
        // death is always null
    }

}
Using Spring Boot version 2.4.1 with spring-boot-starter-amqp.
Any hint as to what I may be doing wrong would be highly appreciated.
It works fine for me; are you sure the message has the header?
@SpringBootApplication
public class So68231711Application {

    public static void main(String[] args) {
        SpringApplication.run(So68231711Application.class, args);
    }

    @Bean
    Queue queue() {
        return QueueBuilder.durable("so68231711")
                .deadLetterExchange("")
                .deadLetterRoutingKey("so68231711.dlq")
                .build();
    }

    @Bean
    Queue dlq() {
        return new Queue("so68231711.dlq");
    }

    @RabbitListener(queues = "so68231711")
    public void listen(String in) {
        System.out.println(in);
        throw new AmqpRejectAndDontRequeueException("toDLQ");
    }

    @RabbitListener(queues = "so68231711.dlq")
    public void listenDlq(String in, @Header(name = "x-death", required = false) Map<?, ?> death) {
        System.out.println(in);
        System.out.println(death);
    }

}
foo
... Execution of Rabbit message listener failed.
...
foo
{reason=rejected, count=1, exchange=, time=Tue Jul 06 11:09:30 EDT 2021, routing-keys=[so68231711], queue=so68231711}
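Building on that demo, a minimal sketch of reading individual fields back out of the header (the method name is mine; the field names follow the output printed above):

@RabbitListener(queues = "so68231711.dlq")
public void listenDlqFields(String in, @Header(name = "x-death", required = false) Map<?, ?> death) {
    if (death != null) {
        // e.g. count=1, reason=rejected in the output above
        System.out.println("dead-lettered " + death.get("count") + " time(s), reason: " + death.get("reason"));
    }
}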

Get the partition and offset number in which a Kafka message is being processed, using StreamBridge

I need to print/log/store the Kafka partition and offset number in which my message is being processed. How can I achieve that?
I am using StreamBridge to send the message from the producer, and I am also using the functional Spring Kafka Streams approach.
public void delegateToSupplier(String id, Abc obj) {
    Message<Abc> message = MessageBuilder.withPayload(obj)
            .setHeader(KafkaHeaders.MESSAGE_KEY, id.getBytes())
            .build();
    streamBridge.send("out-topic", message);
}
The record metadata is available (asynchronously) via the metadata channel:
@SpringBootApplication
public class So66436499Application {

    public static void main(String[] args) {
        SpringApplication.run(So66436499Application.class, args);
    }

    @Autowired
    StreamBridge bridge;

    @Bean
    public ApplicationRunner runner() {
        return args -> {
            this.bridge.send("myBinding", "test");
            Thread.sleep(5000);
        };
    }

    @ServiceActivator(inputChannel = "meta")
    void meta(Message<?> sent) {
        System.out.println("Sent: " + sent.getHeaders().get(KafkaHeaders.RECORD_METADATA, RecordMetadata.class));
    }

}
spring.cloud.stream.bindings.myBinding.destination=foo
spring.cloud.stream.kafka.bindings.myBinding.producer.record-metadata-channel=meta
Sent: foo-0@5
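On the consuming side, the partition and offset arrive as headers on the received Message. A minimal sketch (the Consumer bean and its implied binding name processMsg-in-0 are assumptions, not from the question):

@Bean
public java.util.function.Consumer<Message<String>> processMsg() {
    return message -> {
        // populated by the Kafka binder on every received record
        Integer partition = message.getHeaders().get(KafkaHeaders.RECEIVED_PARTITION_ID, Integer.class);
        Long offset = message.getHeaders().get(KafkaHeaders.OFFSET, Long.class);
        System.out.println("Received from partition " + partition + " at offset " + offset);
    };
}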

Spring Cloud Kafka Streams application terminates on exception

I have a simple Spring Cloud Kafka Streams application. The application terminates each time there is an exception, and I'm unable to override this behaviour. The desired outcome is an incremental backoff for certain types of exceptions, or continuing on other types of exceptions. I am using springCloudVersion Hoxton.SR3 and Spring Boot 2.2.6.RELEASE.
application.yaml
spring:
  cloud:
    stream:
      bindings:
        process-in-0:
          destination: test
      kafka:
        streams:
          binder:
            deserializationExceptionHandler: logAndContinue
            configuration:
              default.key.serde: org.apache.kafka.common.serialization.Serdes$StringSerde
              default.value.serde: org.apache.kafka.common.serialization.Serdes$StringSerde
Beans
@Bean
public java.util.function.Consumer<KStream<String, String>> process() {
    return input -> input.process(() -> new EventProcessor());
}

@Bean
public StreamsBuilderFactoryBeanCustomizer customizer() {
    return fb -> {
        fb.getStreamsConfiguration().put(StreamsConfig.DEFAULT_PRODUCTION_EXCEPTION_HANDLER_CLASS_CONFIG,
                ContinueOnErrorHandler.class);
    };
}
EventProcessor
public class EventProcessor implements Processor<String, String>, ProcessorSupplier<String, String> {

    private ProcessorContext context;

    @Override
    public void init(ProcessorContext context) {
        this.context = context;
    }

    @Override
    public void process(String key, String value) {
        throw new RuntimeException("Some exception");
    }

    @Override
    public void close() {
    }

    @Override
    public Processor<String, String> get() {
        return this;
    }

}
ContinueOnErrorHandler
public class ContinueOnErrorHandler implements ProductionExceptionHandler {

    @Override
    public ProductionExceptionHandlerResponse handle(ProducerRecord<byte[], byte[]> record, Exception exception) {
        return ProductionExceptionHandlerResponse.CONTINUE;
    }

    @Override
    public void configure(Map<String, ?> configs) {
        // ignore
    }

}
The custom processor you are using from the consumer throws a RuntimeException in the process method, and nothing catches it; when that exception is thrown, the application simply exits.
The production exception handler you configured has no effect here, since you are not producing anything: a Consumer does not produce. If you have a use case for producing something, you should switch to java.util.function.Function instead, as sketched below.
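A minimal sketch of that Function shape (my own illustration, with placeholder logic; not from the original question):

@Bean
public java.util.function.Function<KStream<String, String>, KStream<String, String>> process() {
    // transform each value and emit the result downstream; only with an output
    // like this does a ProductionExceptionHandler have anything to act on
    return input -> input.mapValues(value -> value.toUpperCase());
}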
To fix the issue here, since you are processing the record in the custom processor (EventProcessor), you should catch any exception and take appropriate action. For example, here is a template:
@Override
public void init(ProcessorContext context) {
    this.context = context;
}

@Override
public void process(String key, String value) {
    try {
        // start processing
        // exception thrown
    }
    catch (Exception e) {
        // take the appropriate action
    }
}
This way, the application won't be terminated when the exception is thrown in the processor.
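As a minimal sketch of "take the appropriate action" with the behaviour the question asks for (incremental backoff for some exception types, continue for others), the version below retries TransientDataAccessException with a doubling pause and skips everything else. The choice of exception types is an assumption, and sleeping here blocks the stream thread, so the whole task pauses while retrying:

@Override
public void process(String key, String value) {
    long backoffMs = 1000; // doubles on every retry
    while (true) {
        try {
            // ... actual processing of key and value ...
            return;
        }
        catch (TransientDataAccessException e) { // assumed retryable type
            try {
                Thread.sleep(backoffMs);
            }
            catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
                return;
            }
            backoffMs *= 2;
        }
        catch (Exception e) { // everything else: log and continue with the next record
            return;
        }
    }
}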

How to ensure a Spring Cloud Stream listener waits to process messages until the application is fully initialized on startup?

With a Spring Cloud Stream Kafka app, how can we ensure that the stream listener waits to process messages until some dependent tasks (e.g. reference data population) are done? The app below fails to process messages because they are delivered too early, before the reference data is loaded. How can we guarantee this kind of ordering within a Spring Boot app?
@Service
public class ApplicationStartupService implements ApplicationRunner {

    private final FooReferenceDataService fooReferenceDataService;

    @Override
    public void run(ApplicationArguments args) throws Exception {
        fooReferenceDataService.loadData();
    }

}
@EnableBinding(MyBinding.class)
public class MyFooStreamProcessor {

    @Autowired
    FooService fooService;

    @StreamListener("my-input")
    public void process(KStream<String, Foo> input) {
        input.foreach((k, v) -> {
            // !!! this fails to save: messages are delivered too early,
            // before the foo reference data has been loaded into the database
            fooService.save(v);
        });
    }

}
spring-cloud-stream: 2.1.0.RELEASE
spring-boot: 2.1.2.RELEASE
I found that this is not available in Spring Cloud Stream as of May 15, 2018:
Kafka - Delay binding until complex service initialisation has completed
Do we have a plan/timeline for when this will be supported?
In the meantime, I achieved what I wanted by using @Order and ApplicationRunner. It's messy, but it works: basically, the stream listener waits until the other work is done.
@Service
@Order(1)
public class ApplicationStartupService implements ApplicationRunner {

    private final FooReferenceDataService fooReferenceDataService;

    @Override
    public void run(ApplicationArguments args) throws Exception {
        fooReferenceDataService.loadData();
    }

}

@EnableBinding(MyBinding.class)
@Order(2)
public class MyFooStreamProcessor implements ApplicationRunner {

    @Autowired
    FooService fooService;

    private final AtomicBoolean ready = new AtomicBoolean(false);

    @StreamListener("my-input")
    public void process(KStream<String, Foo> input) {
        input.foreach((k, v) -> {
            while (!ready.get()) {
                try {
                    log.info("sleeping for other dependent components to finish initialization");
                    Thread.sleep(10000);
                } catch (InterruptedException e) {
                    log.info("woke up");
                }
            }
            fooService.save(v);
        });
    }

    @Override
    public void run(ApplicationArguments args) throws Exception {
        ready.set(true);
    }

}
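A slightly tidier variant of the same workaround, as a minimal sketch: replacing the polling sleep with a java.util.concurrent.CountDownLatch (my own substitution, not from the original post), so the listener simply blocks until the run() callback releases it:

@EnableBinding(MyBinding.class)
@Order(2)
public class MyFooStreamProcessor implements ApplicationRunner {

    @Autowired
    FooService fooService;

    private final CountDownLatch ready = new CountDownLatch(1);

    @StreamListener("my-input")
    public void process(KStream<String, Foo> input) {
        input.foreach((k, v) -> {
            try {
                ready.await(); // block until run() has counted the latch down
            }
            catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
            fooService.save(v);
        });
    }

    @Override
    public void run(ApplicationArguments args) throws Exception {
        ready.countDown(); // release the listener once startup work is done
    }

}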

Spring Cloud Stream Kafka - A subscribable channel has no output

I have an application that does a lot of data processing (on the order of ~1.3 million records at a time), which happens in bursts. The application consumes data from a Kafka topic.
I'm using version 2.0.1 of spring-cloud-stream-starter-kafka to consume the data.
My code is as follows:
Listener:
@Service
public class ListenerService {

    @Autowired
    private Application2<Foo> application;

    @StreamListener(FooStreams.INPUT)
    public void subscribe(@Payload Foo foo) {
        application.sync(foo);
    }

}
Streams:
public interface FooStreams {

    String INPUT = "Foo";

    @Input(value = INPUT)
    SubscribableChannel subscribe();

}
In the main application, I've bound the stream to kafka like this:
@SpringBootApplication
@EnableBinding({FooStreams.class})
public class Application {

    private static final Logger logger = LoggerFactory.getLogger(Application.class);

    public static void main(String[] args) {
        try {
            SpringApplication.run(Application.class, args);
        }
        catch (Exception e) {
            logger.error("Application failed to start");
        }
    }

}
Is there something I am missing? The issue is that memory utilization spikes during data processing and doesn't come back down after the processing is done.
