I'm using Spring Boot with RabbitMQ, and a question came up: is there a limit on the number of consumers I can create?
Where do I find this value?
Spring AMQP has no limit on the number of consumers.
But it will usually be restricted by other things. For example, if you use the SimpleMessageListenerContainer, one consumer corresponds to one thread. When your number of consumers is large, your app may not be able to create that many threads, resulting in OOM: unable to create new native thread.
// Causes an OOM on my machine: unable to create new native thread
@RabbitListener(queues = "testq", concurrency = "10000-10000")
public void listen() {
}
If you use a CachingConnectionFactory with CacheMode set to CONNECTION, your RabbitMQ server may not be able to carry a very large number of consumers (you will probably hit the maximum number of file descriptors), and your app may not be able to connect to RabbitMQ at all.
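A minimal sketch of keeping the consumer count bounded, assuming a Spring Boot app with spring-rabbit on the classpath (the bean and the bounds are illustrative, not a recommendation):

import org.springframework.amqp.rabbit.config.SimpleRabbitListenerContainerFactory;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RabbitConfig {

    @Bean
    public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(
            ConnectionFactory connectionFactory) {
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        // SimpleMessageListenerContainer uses one thread per consumer, so keep
        // these bounds well below what the JVM can create as native threads.
        factory.setConcurrentConsumers(5);
        factory.setMaxConcurrentConsumers(20);
        return factory;
    }
}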
I'm using spring-kafka to consume messages from two Kafka topics, which carry the same message format, as shown below.
@KafkaListener(topics = {"topic_country1", "topic_country2"}, groupId = KafkaUtils.MESSAGE_GROUP)
public void onCustomerMessage(String message, Acknowledgment ack) throws Exception {
log.info("Message : {} is received", message);
ack.acknowledge();
}
Can KafkaListener allocate the number of consumer threads according to the number of topics it listens to, on its own, and process messages from the two topics in parallel? Or does it not support parallel processing, so that messages have to wait in the topic until the current message has been processed?
If the number of messages in the topics grows, I need to autoscale my microservice and start new instances (up to the number of partitions). What parameters (CPU, memory) can I rely on to find out, from KafkaListener's point of view, that the number of messages in the topics is high? (For an API, I can auto-scale the service by monitoring HTTP latency.)
You can set the concurrency property to run more threads, but each partition can only be processed by one thread. To increase concurrency you must increase the number of partitions in each topic. When listening to multiple topics in the same listener, if those topics have only one partition each, you may not get the concurrency you desire unless you change the Kafka consumer partition assignor.
See https://docs.spring.io/spring-kafka/docs/2.5.0.RELEASE/reference/html/#using-ConcurrentMessageListenerContainer
When listening to multiple topics, the default partition distribution may not be what you expect. For example, if you have three topics with five partitions each and you want to use concurrency=15, you see only five active consumers, each assigned one partition from each topic, with the other 10 consumers being idle. This is because the default Kafka PartitionAssignor is the RangeAssignor (see its Javadoc). For this scenario, you may want to consider using the RoundRobinAssignor instead, which distributes the partitions across all of the consumers. Then, each consumer is assigned one topic or partition. ...
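A hedged sketch of both knobs in spring-kafka (the factory wiring is an assumption about your setup; the partition.assignment.strategy property is a standard Kafka consumer setting):

import org.springframework.context.annotation.Bean;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
        ConsumerFactory<String, String> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    factory.setConcurrency(2); // no more threads than assigned partitions will ever be active
    return factory;
}

// In application.properties, spread partitions from both topics across consumers:
// spring.kafka.consumer.properties.partition.assignment.strategy=org.apache.kafka.clients.consumer.RoundRobinAssignor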
If you want to scale horizontally beyond the partition count, and dynamically, consider using something like Parallel Consumer (PC). It can be used within a Spring context.
By using PC, you can process all your keys in parallel, regardless of how long processing takes, and you can be as concurrent as you wish; this can also scale dynamically.
PC solves this directly by sub-partitioning the input partitions by key and processing each key in parallel.
It also tracks per-record acknowledgement. Check out Parallel Consumer on GitHub (it's open source, BTW, and I'm the author).
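A sketch of what that looks like, based on the project's README (assume kafkaConsumer is a plain KafkaConsumer you construct and log is your logger; the API may differ between versions, so treat this as illustrative):

import io.confluent.parallelconsumer.ParallelConsumerOptions;
import io.confluent.parallelconsumer.ParallelConsumerOptions.ProcessingOrder;
import io.confluent.parallelconsumer.ParallelStreamProcessor;
import java.util.List;

ParallelConsumerOptions<String, String> options = ParallelConsumerOptions.<String, String>builder()
        .ordering(ProcessingOrder.KEY)   // per-key ordering; distinct keys run in parallel
        .maxConcurrency(1000)            // far beyond the partition count
        .consumer(kafkaConsumer)
        .build();

ParallelStreamProcessor<String, String> processor =
        ParallelStreamProcessor.createEosStreamProcessor(options);
processor.subscribe(List.of("topic_country1", "topic_country2"));
processor.poll(context -> log.info("Message : {} is received", context.getSingleConsumerRecord().value()));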
I want several hundred client apps to create and use temporary queues on one instance of the middleware.
Are there any performance drawbacks to using temporary queues? Are there limitations, for example on how many temporary queues can be created per HornetQ instance?
On a recent project we have switched from using temporary queues to using static queues on SonicMQ. We had implemented synchronous service calls over JMS where the response of each call would be delivered on a dedicated temporary queue, created by the consumer. During stress testing we noticed that the overhead of temporary queue creation and allocated resources started to play a bigger and bigger part when pushing the maximum throughput of the solution.
We changed the solution so it would use static queues between consumer and provider, with a selector to correlate on the JMSCorrelationID. This resulted in better throughput in our case. If you plan on (re)creating the temporary queues for each call your client applications make, it can start to impact performance when higher throughput rates are needed.
Note that selector performance can also start to matter when the number of messages in a queue increases. In our case the solution was designed to hand off messages as soon as possible and not act as a (storage) buffer between consumer and provider, so the number of messages inside a queue was always low.
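For illustration, a minimal JMS sketch of that static-queue pattern (assume an open session and pre-created requestQueue and staticReplyQueue; the timeout is an assumption, the selector syntax is standard JMS):

// Requestor side: send on a static request queue, receive the matching reply
// from a shared static reply queue instead of a per-call temporary queue.
String correlationId = java.util.UUID.randomUUID().toString();

TextMessage request = session.createTextMessage("payload");
request.setJMSCorrelationID(correlationId);
request.setJMSReplyTo(staticReplyQueue); // shared, pre-created queue
session.createProducer(requestQueue).send(request);

// Only consume the reply whose correlation ID matches this call.
MessageConsumer replyConsumer = session.createConsumer(
        staticReplyQueue, "JMSCorrelationID = '" + correlationId + "'");
Message reply = replyConsumer.receive(5000); // wait up to 5 s
replyConsumer.close();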
Recently we noticed that our NServiceBus subscribers were not able to handle the increasing load. We have a fairly constant input stream of events (measurement data from embedded devices), so it is very important that the throughput keeps up with the input.
After some profiling, we concluded that it was not the handling of the events that was taking a lot of time, but rather the NServiceBus process of retrieving and publishing events. To try to get a better idea of what goes on, I recreated the Pub/Sub sample (http://particular.net/articles/nservicebus-step-by-step-publish-subscribe-communication-code-first).
On my laptop, using all the NServiceBus defaults, the maximum throughput of the Ordering.Server is about 10 events/second. The only thing it does is:
class PlaceOrderHandler : IHandleMessages<PlaceOrder>
{
public IBus Bus { get; set; }
public void Handle(PlaceOrder message)
{
Bus.Publish<OrderPlaced>
(e => { e.Id = message.Id; e.Product = message.Product; });
}
}
I then started to play around with configuration settings. Most seemed to have no impact on this (very low) performance, until I switched the subscription storage from the default (RavenDB) to MSMQ:
Configure.With()
.DefaultBuilder()
.UseTransport<Msmq>()
.MsmqSubscriptionStorage();
With this configuration, the throughput instantly went up to 60 messages/sec.
I have two questions:
When using MSMQ as the subscription storage, performance is much better than with RavenDB. Why does something as trivial as the storage for subscription data have such an impact?
I would have expected much higher performance. Are there any other configuration settings I should use to get at least an order of magnitude better than this? On our servers, the maximum throughput when running this sample is about 200 msg/s. That is far from spectacular for a system that doesn't even do anything useful yet.
MSMQ doesn't have native pub/sub capabilities, so NServiceBus adds support for this by storing the list of subscribers and then looping over that list, sending a copy of the event to each subscriber. This translates to X message queuing operations, where X is the number of subscribers. It also explains why RabbitMQ is faster: it has native pub/sub, so only one operation against the broker is needed.
The reason the MSMQ-queue-based storage is faster is that it is local storage (it can't be used if you need to scale out the endpoint), which means we can cache the data, since no other endpoint instances can be updating the storage. In short, we get away with an in-memory lookup, which, as you can see, is the fastest option.
There are plans to add native caching across all storages:
https://github.com/Particular/NServiceBus/issues/1320
200 msg/s sounds quite low. What number do you get if you skip the bus.Publish? (Just to get a baseline.)
Possibility 1: distributed transactions
Distributed transactions are created when processing messages because of the queue-plus-database combination.
Try measuring without transactional handling of the messages. How does that compare?
Possibility 2: MSMQ might not be the best queueing system for your needs
Have you ever considered switching to RabbitMQ as the transport? I have very good experiences with RabbitMQ in combination with MassTransit, with throughput far exceeding the numbers you mention in your question.
I know Spring, but I am a newbie to JMS and have started reading the Spring JMS documentation, where I came across the following:
The number of concurrent sessions/consumers to start for each listener.
Can either be a simple number indicating the maximum number (e.g. "5")
or a range indicating the lower as well as the upper limit (e.g. "3-5").
Note that a specified minimum is just a hint and might be ignored at
runtime. Default is 1; keep concurrency limited to 1 in case of a topic
listener or if queue ordering is important; consider raising it for
general queues.
I just want to understand why the concurrency should be limited to 1 in the case of a topic listener. If I increase it, say to 10 instead of 1, what would happen?
Each subscriber receives a copy of each message published to a topic. It makes no sense at all to set up multiple consumers, since all your application would do is receive the same message 10 times, in different threads.
In the case of a queue, the messages on the queue are distributed among the 10 threads and hence handled concurrently. That is indeed a very common scenario: load balancing.
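For the queue case, a small sketch with Spring JMS (the bean wiring and queue name are assumptions; this targets the javax.jms-era API):

import javax.jms.ConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

@Bean
public DefaultMessageListenerContainer orderContainer(ConnectionFactory cf) {
    DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
    container.setConnectionFactory(cf);
    container.setDestinationName("orders");   // a queue: each message goes to exactly one consumer
    container.setConcurrency("3-10");         // 3 to 10 concurrent consumers, load-balanced
    container.setMessageListener((javax.jms.MessageListener) message -> {
        // handle one message; up to 10 of these run concurrently
    });
    return container;
}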
I am using Spring's DefaultMessageListenerContainer to gain some dynamic benefits when setting the message selector, since I am using Glassfish OpenMQ, which is not that advanced in this regard.
Suppose a JMS message arrives and the listener fails in a way that means: retry after x seconds. On the next failure it means: retry after x*y seconds, and so on; the delay grows exponentially. If the message cannot be handled after z retries, consider it a poison JMS message.
DefaultMessageListenerContainer dmlc; // obtained elsewhere, e.g. from the application context
// The container must be stopped and restarted for the new selector to take effect.
dmlc.stop();
dmlc.setMessageSelector(String.format("retries < %d AND retryTime <= %d", z, System.currentTimeMillis()));
dmlc.start();
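For completeness, a hypothetical companion sketch of the failure path: republish with bumped properties so the selector above will match the message again once its retry time has passed. The property names mirror the selector; jmsTemplate, payload, the queue name, and the backoff constants are assumptions:

// Assumes "retries" was set when the message was first published.
int retries = message.getIntProperty("retries") + 1;
long nextRetryTime = System.currentTimeMillis() + (long) (1000L * Math.pow(2, retries)); // x * y^retries
jmsTemplate.convertAndSend("retryQueue", payload, m -> {
    m.setIntProperty("retries", retries);
    m.setLongProperty("retryTime", nextRetryTime);
    return m;
});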
I am not entirely satisfied with this solution, especially since the Spring docs raise a warning about it :-). However, for the moment it meets our needs.
Now, I have a number of EJB message consumers (MDBs) in different applications. Some of them need such dynamic changes to the message selector. Unfortunately, to the best of my knowledge, EJB MDBs do not support such dynamic "features". For example, see this.
Is that correct? Is there a workaround for an EJB solution? I would appreciate any help.
To achieve dynamic changes to the message selector, you'd need to implement it directly in JMS, e.g.:
ConnectionFactory cf; // e.g. injected, or looked up via JNDI
Connection connection = cf.createConnection();
Session session = connection.createSession(transactional, acknowledgeMode);
MessageConsumer messageConsumer = session.createConsumer(destination, "message selector");
connection.start(); // required before any messages are delivered
Additionally, you'd need to place this code somewhere it can execute on its own, perhaps in an asynchronous task? But you'd be reinventing the wheel, as the Spring DMLC already does this better.
I don't know why you're doing this:
For load balancing? The message broker should take care of that.
For handling temporary downtimes? The queue should be configured to store an appropriate number of messages, or to switch delivery to another node in the cluster.