The messages created by the producer are all being consumed as expected.
The thing is, I need to create an endpoint to retrieve the latest messages from the consumer.
Is there a way to do it?
Like an on-demand consumer?
I found this SO post, but it only covers consuming the last N records. I want to consume the latest messages without caring about the offsets.
Spring Kafka Consumer, rewind consumer offset to go back 'n' records
I'm working with Kotlin but if you have the answer in Java I don't mind either.
There are several ways to create listener containers dynamically; you can then start/stop them on demand. To get the records back into the controller, you'd need to use something like a blocking queue, or make the controller itself a MessageListener.
These answers show a couple of techniques for creating containers on demand:
How to dynamically create multiple consumers in Spring Kafka
Kafka Consumer in spring can I re-assign partitions programmatically?
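To make the blocking-queue idea concrete, here is a minimal sketch in Java (the topic name "my-topic", the endpoint path, and String records are my assumptions; with a fresh group id per request and the default auto.offset.reset=latest, each call only sees messages that arrive while it is running):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.UUID;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.TimeUnit;

    import org.springframework.kafka.core.ConsumerFactory;
    import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;
    import org.springframework.kafka.listener.ContainerProperties;
    import org.springframework.kafka.listener.MessageListener;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RestController;

    @RestController
    public class LatestMessagesController {

        private final ConsumerFactory<String, String> consumerFactory;

        public LatestMessagesController(ConsumerFactory<String, String> consumerFactory) {
            this.consumerFactory = consumerFactory;
        }

        @GetMapping("/latest")
        public List<String> latest() throws InterruptedException {
            BlockingQueue<String> queue = new LinkedBlockingQueue<>();
            // A container created on demand, with a unique group so no offsets are reused.
            ContainerProperties props = new ContainerProperties("my-topic");
            props.setGroupId("on-demand-" + UUID.randomUUID());
            props.setMessageListener(
                    (MessageListener<String, String>) record -> queue.add(record.value()));
            ConcurrentMessageListenerContainer<String, String> container =
                    new ConcurrentMessageListenerContainer<>(consumerFactory, props);
            container.start();
            try {
                List<String> latest = new ArrayList<>();
                String value;
                // Drain until nothing new has arrived for 2 seconds.
                while ((value = queue.poll(2, TimeUnit.SECONDS)) != null) {
                    latest.add(value);
                }
                return latest;
            } finally {
                container.stop();
            }
        }
    }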
I want to implement a Kafka producer with Spring that watches a cloud storage location and emits meta information about newly arrived files.
Until now we did that with a Kafka Connector, but for various reasons we now have to do this with a simple Kafka producer.
Now I need to persist the state of the producer (e.g. the timestamp of the last committed file) in a kind of offset topic, like the Connector did, but I have not found a reasonable approach for that.
My current idea is to hold the state by committing it to a topic that the producer also consumes, acknowledging the previous state only when committing a new one. So if the Kubernetes pod of the producer dies and comes up again, it consumes the last (unacknowledged) state and thus knows where it stopped.
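Roughly, the recovery on startup would look like this (just a sketch; "producer-state" is a hypothetical single-partition topic holding the String-serialized state, and the bootstrap address is a placeholder):

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;

    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class StateRecovery {

        public static String recoverLastState() {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                TopicPartition statePartition = new TopicPartition("producer-state", 0);
                consumer.assign(Collections.singletonList(statePartition));
                consumer.seekToEnd(Collections.singletonList(statePartition));
                long end = consumer.position(statePartition);

                String lastState = null;
                if (end > 0) {
                    // The newest record on the state topic is the last committed state.
                    consumer.seek(statePartition, end - 1);
                    for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(5))) {
                        lastState = record.value();
                    }
                }
                // e.g. the timestamp of the last committed file, or null on first start
                return lastState;
            }
        }
    }

A log-compacted state topic with a constant key would keep it from growing without bound.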
But this idea seems a bit too complex just to hold the state of a Kafka app. Is there a better approach?
I have messages coming in from Kafka, so I am planning to write a listener and, in its onMessage, process each message and push it into Solr.
My question is more architectural: I have worked on web apps all my career, so in a big-data setting, how do I deploy the Spring Kafka listener so that I can process thousands of messages a second?
How do I make my Spring code use multiple nodes to distribute the load?
I am planning to write a Spring Boot application to run in a Tomcat container.
If you use the same group id for all instances, different partitions will be assigned to different consumers (instances of your application).
So, be sure you specify enough partitions in the topic you are going to consume.
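A minimal sketch of what that looks like (the topic name "events", group id "solr-indexer", partition count, and concurrency are placeholder choices), deployed unchanged on every node:

    import org.apache.kafka.clients.admin.NewTopic;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.springframework.context.annotation.Bean;
    import org.springframework.kafka.annotation.KafkaListener;
    import org.springframework.kafka.config.TopicBuilder;
    import org.springframework.stereotype.Component;

    @Component
    public class SolrIndexingListener {

        // With 12 partitions, 4 instances x concurrency 3 gives each listener
        // thread exactly one partition.
        @Bean
        public NewTopic events() {
            return TopicBuilder.name("events").partitions(12).replicas(1).build();
        }

        @KafkaListener(topics = "events", groupId = "solr-indexer", concurrency = "3")
        public void onMessage(ConsumerRecord<String, String> record) {
            // process the record and push it into Solr here
        }
    }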
In my POC, I am using Spring Cloud Config and Spring Cloud Stream Rabbit. I want to dynamically change the number of listeners (concurrency). Is it possible to do that? I want to do the following:
1) If there are too many messages in the queue, I want to increase the concurrency level.
2) In the scenario where my downstream system is not available, I want to stop processing messages from the queue (in short, concurrency level 0).
How can I achieve this?
Thanks for the help.
The listener container running in the binder supports such changes (you can't go down to 0, but the container can be stop()ped).
However, spring-cloud-stream provides no mechanism for you to get a reference to the listener container.
You might want to consider using a @RabbitListener from Spring AMQP instead - it will give you complete control over the listener container.
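For example (a sketch; the queue name "work" and the listener id "workListener" are made up), the endpoint registry hands you the container, so you can stop it entirely or change the number of consumers at runtime:

    import org.springframework.amqp.rabbit.annotation.RabbitListener;
    import org.springframework.amqp.rabbit.listener.RabbitListenerEndpointRegistry;
    import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;
    import org.springframework.stereotype.Component;

    @Component
    public class WorkListener {

        private final RabbitListenerEndpointRegistry registry;

        public WorkListener(RabbitListenerEndpointRegistry registry) {
            this.registry = registry;
        }

        @RabbitListener(id = "workListener", queues = "work")
        public void handle(String payload) {
            // call the downstream system here
        }

        // Scenario 2: downstream unavailable - stop consuming altogether.
        public void pause() {
            registry.getListenerContainer("workListener").stop();
        }

        public void resume() {
            registry.getListenerContainer("workListener").start();
        }

        // Scenario 1: queue backing up - raise the number of concurrent consumers.
        public void scaleTo(int consumers) {
            SimpleMessageListenerContainer container =
                    (SimpleMessageListenerContainer) registry.getListenerContainer("workListener");
            container.setConcurrentConsumers(consumers);
        }
    }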
I am trying to get multiple listeners configured on the same queue, but with different message selectors. I am using the Solace JMS provider.
The behavior is that the first listener loaded has its selector registered and receives the messages.
The second listener is NOT receiving the messages. I am using Spring Integration DSL 1.1.3.
What could be wrong?
I tried with two different queue connection factories, but could not get it working.
How can we have two selective consumers configured?
I think you should start with your vendor first of all and try to figure out whether it supports concurrent selective consumers.
Although you have to bear in mind that with a Queue, only one consumer receives a given message anyway. So, if the first one is able to process the message, the second one won't receive it, even with a different selector.
Consider switching to a Topic.
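To illustrate with plain Spring JMS (a sketch; the topic name "orders" and the selectors are made up), two containers in pub/sub mode each get their own copy of every message, so both selectors can match independently:

    import javax.jms.ConnectionFactory;
    import javax.jms.MessageListener;

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.jms.listener.DefaultMessageListenerContainer;

    @Configuration
    public class SelectiveConsumersConfig {

        @Bean
        public DefaultMessageListenerContainer typeAListener(ConnectionFactory cf) {
            return selectiveContainer(cf, "type = 'A'");
        }

        @Bean
        public DefaultMessageListenerContainer typeBListener(ConnectionFactory cf) {
            return selectiveContainer(cf, "type = 'B'");
        }

        private DefaultMessageListenerContainer selectiveContainer(ConnectionFactory cf, String selector) {
            DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
            container.setConnectionFactory(cf);
            container.setDestinationName("orders");
            container.setPubSubDomain(true); // Topic semantics: every subscriber gets a copy
            container.setMessageSelector(selector);
            container.setMessageListener((MessageListener) message -> {
                // handle messages matching this selector
            });
            return container;
        }
    }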
Our project is to integrate two applications, using the REST API of each and using JMS (to provide asynchronicity). Application-1 writes the message to the queue. The next step is to read the message from the queue, process it, and send it to Application-2.
I have two questions:
Should we use one more queue for storing messages after processing and before sending them to Application-2?
Should we use spring batch or spring integration to read/process the data?
Either you are not showing the whole premise, or you are really over-engineering your app. If there is just a need to read messages from the queue, it is enough to use Spring JMS directly... On the other hand, with Spring Integration and the power of its adapters, you can simply pass messages from the <int-jms:message-driven-channel-adapter> to the <int-http:outbound-channel-adapter>.
I don't see a reason to store the message anywhere else in the read-and-send process, because if an exception occurs here, you just roll the message back to the JMS queue.
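A rough equivalent in the Java DSL (a sketch only; the queue name "app1-out" and the Application-2 URL are placeholders) could look like this; with a transacted session on the listener container, a failed HTTP call would indeed put the message back on the queue:

    import javax.jms.ConnectionFactory;

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.http.HttpMethod;
    import org.springframework.integration.dsl.IntegrationFlow;
    import org.springframework.integration.dsl.IntegrationFlows;
    import org.springframework.integration.http.dsl.Http;
    import org.springframework.integration.jms.dsl.Jms;

    @Configuration
    public class BridgeConfig {

        @Bean
        public IntegrationFlow bridgeFlow(ConnectionFactory connectionFactory) {
            return IntegrationFlows
                    .from(Jms.messageDrivenChannelAdapter(connectionFactory)
                            .destination("app1-out"))
                    // any processing steps (.transform(...), .handle(...)) go here;
                    // an exception in the flow propagates back to the JMS listener
                    .handle(Http.outboundChannelAdapter("http://app2.example.com/api/items")
                            .httpMethod(HttpMethod.POST))
                    .get();
        }
    }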