I am writing a service with Spring, and I am using Spring AMQP to connect to RabbitMQ.
I have two RabbitMQ clusters: one is only for publishing messages (the messages are sent on to the other cluster via the federation plugin), and the other cluster is for declaring the queues that end users will consume from.
The nodes sit behind AWS load balancers; each cluster has its own LB.
I am using CachingConnectionFactory, RabbitTemplate, and RabbitAdmin in my code, and I want to have connections to all the nodes so I can use them.
For the cluster that will contain the queues, I added queue-master-locator=random to the config so that new queues get declared on any of the nodes in the cluster, even if my service does not have a direct connection to that node.
With the cluster used for publishing I have more of a problem, because I need a direct connection from my service to each of the nodes so I can spread the load between them.
So my question is: how do I create connections from my service to all the nodes in the cluster so that they will all be used for declaring queues and sending messages?
And once I have some sort of solution to this issue, the next issue is what happens when a new node is added to the cluster. How can I create a connection to it and start using it as well?
I am using RabbitMQ 3.7.9, Spring 2.0.5, and Spring AMQP 2.0.5.
Thanks a lot!
There is currently no mechanism to do anything like that.
By default, Spring AMQP opens only one connection (optionally two, one for publishing, one for consuming).
Even when using CacheMode.CONNECTION, where you get a new connection for each consumer (and connections are created and cached on demand for producers), you won't get any control over which node each connection goes to; that's a function of the LB.
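For reference, here's a minimal sketch of what CacheMode.CONNECTION looks like (the host name and cache size are placeholders); every connection still goes through the LB address, so node selection stays with the LB:

```java
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory.CacheMode;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RabbitConnectionConfig {

    @Bean
    public CachingConnectionFactory connectionFactory() {
        // "my-cluster-lb" is a placeholder; all connections are opened through the LB,
        // so which node each one lands on is decided by the LB, not by Spring AMQP.
        CachingConnectionFactory cf = new CachingConnectionFactory("my-cluster-lb");
        cf.setCacheMode(CacheMode.CONNECTION);
        // Number of idle connections to keep cached; not a cap on open connections.
        cf.setConnectionCacheSize(10);
        return cf;
    }
}
```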
The framework does provide the LocalizedQueueConnectionFactory which will try to consume from the node that hosts a queue, but it won't work with a load balancer in place.
In general, however, such optimization is rarely needed.
Are you trying to solve an actual problem you are experiencing now, or something that you perceive that might be a problem?
It is generally best not to perform premature optimization.
Related
We are currently writing a library that consumes RabbitMQ events with spring-amqp.
This library will be used by applications that themselves use RabbitMQ with spring-amqp.
Is it possible to isolate the separate RabbitMQ configurations from each other, so that the configuration from within the library doesn't interfere with the existing ones in the applications?
Both would connect to the same RabbitMQ cluster.
I looked through the documentation of spring-amqp but only found a way to split the rabbit configuration for consuming and producing events.
Since spring-amqp 2.3 there is Multiple Broker (or Cluster) Support, which can also be used to create multiple connections to the same broker. You can find a sample config at this link.
Also, you can take a look at the spring-multirabbit library (https://github.com/freenowtech/spring-multirabbit), which is the ancestor of that feature in spring-amqp and can be used to add multiple-RabbitMQ-connection support to a service that already has a Spring-configured connection, in a non-intrusive way.
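Independently of that feature, here's a rough sketch of how the library could keep its own, fully qualified set of beans alongside the application's defaults (the bean names, host, and queue are placeholders, not part of any library API):

```java
import org.springframework.amqp.rabbit.config.SimpleRabbitListenerContainerFactory;
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitAdmin;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class LibraryRabbitConfig {

    // A dedicated connection factory for the library, separate from the
    // application's own (typically @Primary) connection factory.
    @Bean
    public CachingConnectionFactory libraryConnectionFactory() {
        return new CachingConnectionFactory("same-cluster-host");
    }

    @Bean
    public RabbitTemplate libraryRabbitTemplate(
            @Qualifier("libraryConnectionFactory") CachingConnectionFactory cf) {
        return new RabbitTemplate(cf);
    }

    @Bean
    public RabbitAdmin libraryRabbitAdmin(
            @Qualifier("libraryConnectionFactory") CachingConnectionFactory cf) {
        return new RabbitAdmin(cf);
    }

    // Listeners defined by the library reference this factory explicitly, e.g.
    // @RabbitListener(queues = "library.queue", containerFactory = "libraryListenerContainerFactory")
    @Bean
    public SimpleRabbitListenerContainerFactory libraryListenerContainerFactory(
            @Qualifier("libraryConnectionFactory") CachingConnectionFactory cf) {
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        factory.setConnectionFactory(cf);
        return factory;
    }
}
```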
We have configured our ActiveMQ message broker as a Spring Boot project, and there's another Spring Boot application (let's call it service-A) that has a listener configured to listen to some topics using the @JmsListener annotation. It's a Spring Cloud microservice application.
The problem:
It is possible that service-A can have multiple instances running.
If we have 2 instances running, then any message coming on topic gets listened to twice.
How can we avoid every instance listening to the topic?
We want to make sure that the topic is listened to only once, no matter the number of service-A instances.
Is it possible to run the microservice in a cluster mode or something similar? I also checked out ActiveMQ virtual destinations but not too sure if that's the solution to the problem.
We have also thought of an approach where we decide which instance is the leader node among the multiple instances, but that's the last resort and we are looking for a cleaner approach.
Any useful pointers, references are welcome.
What you really want is a shared topic subscription, which was added in JMS 2. Unfortunately ActiveMQ 5.x doesn't support JMS 2. However, ActiveMQ Artemis does.
ActiveMQ Artemis is the next generation broker from ActiveMQ. It supports most of the same features as ActiveMQ 5.x (including full support for OpenWire clients) as well as many other features that 5.x doesn't support (e.g. JMS 2, shared-nothing high-availability using replication, last-value queues, ring queues, metrics plugins for integration with tools like Prometheus, duplicate message detection, etc.). Furthermore, ActiveMQ Artemis is built on a high-performance, non-blocking core which means scalability is much better as well.
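For illustration, here's a minimal sketch of a JMS 2 shared durable subscription using Spring's @JmsListener against an Artemis broker (the destination, subscription name, and factory bean name are placeholders). Because every service-A instance registers with the same subscription name, the broker delivers each message to only one of them:

```java
import javax.jms.ConnectionFactory;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.annotation.EnableJms;
import org.springframework.jms.annotation.JmsListener;
import org.springframework.jms.config.DefaultJmsListenerContainerFactory;
import org.springframework.stereotype.Component;

@Configuration
@EnableJms
class SharedTopicConfig {

    @Bean
    public DefaultJmsListenerContainerFactory sharedTopicFactory(ConnectionFactory connectionFactory) {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        factory.setPubSubDomain(true);        // listen on a topic, not a queue
        factory.setSubscriptionShared(true);  // JMS 2 shared subscription
        factory.setSubscriptionDurable(true); // subscription survives restarts
        return factory;
    }
}

@Component
class EventConsumer {

    // All instances use the same subscription name, so each message published
    // to the topic is processed by exactly one instance.
    @JmsListener(destination = "service-a.events",
                 containerFactory = "sharedTopicFactory",
                 subscription = "service-a-shared")
    public void onEvent(String payload) {
        // process the event
    }
}
```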
I want to use Spring Batch remote partitioning to handle large workloads on the cloud, and spin up/shutdown VMs on demand.
However, when configuring the slave steps, I'm using the StepExecutionRequestHandler to handle the step requests from a JMS queue. Right now the application just hangs. How can I shut down the application after the queue is depleted?
How can I shut down the application after the queue is depleted?
In a remote partitioning setup, workers are listeners on a queue to which StepExecutionRequests are sent. The question is how to know, from the listener's point of view, that the queue is depleted. This is a tricky design problem. There are some known solutions, like the "End-Of-Stream" message or "Poison" record, but those are tricky too, since you have to make sure all listeners get one such message.
If you are using Spring Cloud Task to launch your workers, you can use the DeployerPartitionHandler, which provides an elegant way to dynamically create workers on demand, up to a configurable maximum number. You can find more details about it here: https://docs.spring.io/spring-cloud-task/docs/2.0.0.RELEASE/reference/htmlsingle/#batch-partitioning and an example in this github repo: https://github.com/mminella/scaling-demos/blob/master/partitioned-demo/src/main/java/io/spring/batch/partitiondemo/configuration/BatchConfiguration.java#L75
The icing on the cake is that this is based on Spring Cloud Deployer, which means you can use it on any cloud provider that implements the SCD SPI. Here is how to do it for:
Kubernetes: https://docs.spring.io/spring-cloud-task/docs/2.0.0.RELEASE/reference/htmlsingle/#_notes_on_developing_a_batch_partitioned_application_for_the_kubernetes_platform
Cloud Foundry: https://docs.spring.io/spring-cloud-task/docs/2.0.0.RELEASE/reference/htmlsingle/#_notes_on_developing_a_batch_partitioned_application_for_the_cloud_foundry_platform
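Along the lines of the linked example, here's a rough sketch of wiring a DeployerPartitionHandler inside the master's batch configuration (the worker step name, artifact coordinates, application name, and worker count are assumptions for illustration):

```java
import org.springframework.batch.core.explore.JobExplorer;
import org.springframework.batch.core.partition.PartitionHandler;
import org.springframework.cloud.deployer.spi.task.TaskLauncher;
import org.springframework.cloud.task.batch.partition.DeployerPartitionHandler;
import org.springframework.context.annotation.Bean;
import org.springframework.core.io.Resource;
import org.springframework.core.io.ResourceLoader;

@Bean
public PartitionHandler partitionHandler(TaskLauncher taskLauncher,
                                         JobExplorer jobExplorer,
                                         ResourceLoader resourceLoader) {
    // The deployable artifact containing the worker application; the coordinates are placeholders.
    Resource workerResource =
            resourceLoader.getResource("maven://io.example:partition-worker:1.0.0");

    // "workerStep" must match the name of the step the workers execute.
    DeployerPartitionHandler handler =
            new DeployerPartitionHandler(taskLauncher, jobExplorer, workerResource, "workerStep");

    handler.setMaxWorkers(3);                       // cap on concurrently launched workers
    handler.setApplicationName("partition-worker"); // task name given to the launched workers
    return handler;
}
```

Workers launched this way run, process their partitions, and exit, so nothing is left hanging on an empty queue.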
I've managed to get Spring XD working for a scenario where I have data coming in from one JMS broker.
I am potentially facing a scenario where data ingestion could happen from different sources, which would require me to connect to different brokers.
Based on my current understanding, I'm not quite sure how to do this, as there is a JMS config file which allows you to set up only one broker.
Is there a workaround to this?
At the moment, you would have to create a separate jms-[provider]-infrastructure-context.xml for each broker (in modules/common); say, call the provider activemq2.
Then use --provider=activemq2 in the module definition.
(I recently used this technique to test sonicmq and hornetq providers).
To process a large number of messages coming to a queue, I need a guarantee that at least one JMS connection is available at any time. I am using Spring, and Spring only allows multiple sessions on a single connection. If that one and only connection fails, the application will come to a standstill until Spring reconnects to the JMS bridge.
So how can I create more than one connection to a queue in Spring, and how can I do connection pooling here?
The answer to this depends on whether you are using Spring inside a J2EE container (JBoss, etc.) or in a standalone application.
Standalone - you'll find pooling connections to be a problem. Spring's SingleConnectionFactory can be set up to renew the connection on an exception, guaranteeing that at some point a connection will come back online and the queue will start being processed again, but you'll still have the problem of waiting for that single connection to be renewed. Also, depending on which messaging implementation you're dealing with and how it does load balancing, you may find yourself stuck with a connection to a single node in a cluster.
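For the standalone case, here's a minimal sketch using Spring's CachingConnectionFactory (which extends SingleConnectionFactory); the ActiveMQ factory and broker URL are just placeholder examples of a provider's connection factory:

```java
import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.jms.connection.CachingConnectionFactory;

// The provider's own connection factory; swap in whatever your broker ships with.
ActiveMQConnectionFactory targetFactory =
        new ActiveMQConnectionFactory("tcp://localhost:61616");

// One shared connection, with a pool of cached sessions on top of it.
CachingConnectionFactory cachingFactory = new CachingConnectionFactory(targetFactory);
cachingFactory.setSessionCacheSize(10);       // sessions kept around for reuse
cachingFactory.setReconnectOnException(true); // recreate the connection after a failure
```

This still gives you only one underlying connection, so it mitigates rather than removes the single-connection problem described above.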
If you are running in a container, you can rely on the container's connection factory, which will be much more robust. JBoss Messaging in the container, for instance, will fail over seamlessly to other nodes and handles pooling under the covers. But if you're working in the container, it's usually easier to bail on JmsTemplate (which kind of sucks) and use whatever the container provides.