Can someone please tell me whether spring-Kafka has a feature like spring-JMS that can dynamically spin up or reduce threads based on load?
When using Kafka, do we need to worry about this thread management at all? I know the best practice for a Kafka consumer is to have as many threads as there are partitions on the topic.
Spring for Apache Kafka does not dynamically adjust the number of consumer threads in any way. The partitions will be distributed across the number of threads you configure.
You could query the topic and configure the container appropriately before starting it.
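For illustration, a minimal sketch of that approach, assuming a recent kafka-clients AdminClient API; the topic name my-topic and the bean wiring are placeholders, not from the answer:

```java
import java.util.Collections;

import org.apache.kafka.clients.admin.AdminClient;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.KafkaAdmin;

@Configuration
public class KafkaConsumerConfig {

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
            ConsumerFactory<String, String> consumerFactory, KafkaAdmin kafkaAdmin) throws Exception {

        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);

        // Query the topic's partition count and use it as the number of consumer threads.
        try (AdminClient admin = AdminClient.create(kafkaAdmin.getConfigurationProperties())) {
            int partitions = admin.describeTopics(Collections.singletonList("my-topic"))
                    .allTopicNames().get()
                    .get("my-topic").partitions().size();
            factory.setConcurrency(partitions);
        }
        return factory;
    }
}
```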
My application has SQS listeners with different needs. Some listeners take a while to process each message, so I would prefer to configure a low max number of messages. Some listeners are quick and simple, so to optimize cost I would prefer to set a higher max number of messages.
With Spring Cloud Messaging, it seems you can only configure one SimpleMessageListenerContainerFactory, which controls the max number of messages. Is there support for having multiple listener configurations?
I noticed somebody asked this as well, but I'm not sure the answer makes sense.
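For context, a hedged sketch of the single shared factory the question is describing (Spring Cloud AWS; the value 10 is only an example), where every listener gets the same setting:

```java
import com.amazonaws.services.sqs.AmazonSQSAsync;
import org.springframework.cloud.aws.messaging.config.SimpleMessageListenerContainerFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class SqsListenerConfig {

    @Bean
    public SimpleMessageListenerContainerFactory simpleMessageListenerContainerFactory(AmazonSQSAsync amazonSqs) {
        SimpleMessageListenerContainerFactory factory = new SimpleMessageListenerContainerFactory();
        factory.setAmazonSqs(amazonSqs);
        // Applies to every @SqsListener registered through this factory -- there is
        // no per-listener override here, which is the limitation described above.
        factory.setMaxNumberOfMessages(10);
        return factory;
    }
}
```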
I want to build a single Spring Boot application that does multiple different tasks concurrently. I did some research on the internet but could not find a way forward. Let me go into detail.
I would like to start jobs at certain intervals, for example once a day; I can do that using Spring Quartz. I would also like to listen for messages on a dedicated address. The messages will come from the Apache Kafka platform, so I would like to use the Kafka integration for the Spring framework.
Is this practical (listening for messages all the time while executing scheduled jobs on time)?
Functionally speaking, this design is fine: a single Spring Boot app can consume Kafka messages while also executing Quartz jobs.
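As a rough illustration (using @Scheduled in place of a full Quartz setup; the topic name, group id and cron expression are placeholders), both concerns can live in one Spring Boot application:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.scheduling.annotation.Scheduled;

@SpringBootApplication
@EnableScheduling
public class CombinedApplication {

    public static void main(String[] args) {
        SpringApplication.run(CombinedApplication.class, args);
    }

    // Consumes Kafka messages continuously on the listener container's threads.
    @KafkaListener(topics = "events", groupId = "combined-app")
    public void onMessage(String message) {
        System.out.println("Consumed: " + message);
    }

    // Runs once a day (03:00) on the scheduling thread, independently of the Kafka listener.
    @Scheduled(cron = "0 0 3 * * *")
    public void dailyJob() {
        System.out.println("Running daily job");
    }
}
```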
But at a higher level, you should ask why these two functions belong in a single app. Is there some inherent relationship between the Quartz jobs and the Kafka messages being consumed? Or are you combining them solely to limit yourself to one app and save on compute/memory resources?
You should also consider the impact on scalability. What if you need to increase the rate at which you consume Kafka messages? If you scale your app out to get more Kafka consumers, you now have to worry about multiple apps firing your Quartz jobs.
So yes, it can be done, but without any more detail it sounds like you should split this design into two separate applications: one for Quartz and one for Kafka consumption.
We're using Spring Cloud to serve asynchronous tasks. I wonder if there is any way to scale the listeners set up by @StreamListener? The goal is to have multiple workers within one application instance.
I read about spring.cloud.stream.instancecount, but I don't want to replicate the whole application, only increase the worker count.
You should be able to accomplish that via the spring.cloud.stream.bindings.input.consumer.concurrency consumer property. Here is more info.
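For example, a hedged configuration sketch; the binding name input matches the property above, while the destination, group and worker count are placeholders:

```properties
spring.cloud.stream.bindings.input.destination=tasks
spring.cloud.stream.bindings.input.group=task-workers
# Number of listener threads (workers) within this single application instance.
spring.cloud.stream.bindings.input.consumer.concurrency=4
```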
I have messages coming in from Kafka, so I am planning to write a listener with an onMessage method; I want to process each message and push it into Solr.
So my question is more architectural. I have worked on web apps all my career, so in the big data world, how do I deploy the Spring Kafka listener so that I can process thousands of messages a second?
How do I make my Spring code use multiple nodes to distribute the load? I am planning to write a Spring Boot application to run in a Tomcat container.
If you use the same group id for all instances, different partitions will be assigned to different consumers (instances of your application).
So, make sure you specify enough partitions on the topic you are going to consume from.
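A minimal sketch of what that might look like with spring-kafka; the topic name, partition count, replica count and group id are placeholders:

```java
import org.apache.kafka.clients.admin.NewTopic;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.config.TopicBuilder;
import org.springframework.stereotype.Component;

@Configuration
class KafkaTopicConfig {

    // Create (or verify) the topic with enough partitions for the number of app instances you plan to run.
    @Bean
    public NewTopic documentsTopic() {
        return TopicBuilder.name("documents").partitions(12).replicas(3).build();
    }
}

@Component
class DocumentListener {

    // Every instance uses the same group id, so Kafka assigns each instance a share of the partitions.
    @KafkaListener(topics = "documents", groupId = "solr-indexer")
    public void onMessage(String message) {
        // process the message and push it into Solr
    }
}
```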
I've been load testing different JMS implementations for our notification service.
None of ActiveMQ, HornetQ and OpenMQ behaved as expected (issues with reliability and message prioritization), but so far I've had the best results with OpenMQ, except for two issues that are probably just misconfiguration (I hope). One is with JDBC storage.
Test scenario:
Two producers send messages with different priorities to one queue. One consumer consumes from the queue at a constant rate that is slightly lower than the producers' rate. OpenMQ runs standalone and uses PostgreSQL as its persistence store. All messages are sent and consumed from an Apache Camel route, and everything is persistent.
Issues:
After about 50,000 messages I see warnings and errors in the OpenMQ log about low memory (default configuration with a 256 MB heap). Producing is throttled by the broker, and after some time the broker stops dispatching altogether. The broker JVM's memory usage is at its maximum.
How must I configure the broker to achieve this goal:
The broker should not depend on queue size (up to 1,000,000 messages) or on a memory limit. Performance is not an issue, only reliability.
Is that possible?
I cannot help with OpenMQ, but perhaps with Camel and ActiveMQ. What issues do you face with ActiveMQ? Can you post your Camel route, and possibly your Spring context and the ActiveMQ config?