We have written a simple mechanism that pushes messages from the server to clients (per logged-in user) using Spring Boot + WebSocket.
Currently it runs on a single server, which works fine.
However, our production servers run in a load-balanced environment.
How can we make sure that messages pushed from any of the server nodes reach the appropriate users?
Please advise on the possibilities. I have read some articles about RabbitMQ with SockJS, but it is not clear to me whether that will work under load balancing.
Thanks
If you have multiple instances of your WebSocket server, then every instance needs to know about the sessions that exist on the other instances.
Therefore you need to use a broker relay (not the in-memory broker provided by Spring) and set the userRegistryBroadcast property.
You can find some information about this towards the end of this talk: https://www.youtube.com/watch?v=nxakp15CACY
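As a rough sketch, assuming a full-featured broker such as RabbitMQ with the STOMP plugin (the host, port and destination names below are placeholders you would adjust to your own setup):

    import org.springframework.context.annotation.Configuration;
    import org.springframework.messaging.simp.config.MessageBrokerRegistry;
    import org.springframework.web.socket.config.annotation.EnableWebSocketMessageBroker;
    import org.springframework.web.socket.config.annotation.StompEndpointRegistry;
    import org.springframework.web.socket.config.annotation.WebSocketMessageBrokerConfigurer;

    @Configuration
    @EnableWebSocketMessageBroker
    public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {

        @Override
        public void configureMessageBroker(MessageBrokerRegistry registry) {
            // Relay to an external STOMP broker instead of the in-memory simple broker.
            registry.enableStompBrokerRelay("/topic", "/queue")
                    .setRelayHost("rabbitmq.example.com")
                    .setRelayPort(61613)
                    // Broadcasts the user registry so each instance knows which
                    // users are connected to the other instances.
                    .setUserRegistryBroadcast("/topic/simp-user-registry")
                    .setUserDestinationBroadcast("/topic/unresolved-user-destination");
            registry.setApplicationDestinationPrefixes("/app");
        }

        @Override
        public void registerStompEndpoints(StompEndpointRegistry registry) {
            registry.addEndpoint("/ws");
        }
    }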
I cannot work out how to proceed. I am using Spring Boot 2, Oracle and IBM MQ.
I have made two async requests to external applications, and I need to perform an operation once both responses have been received.
I am unable to set this up because multiple instances of the application are running and listening to the same response queue.
I tried using @Transactional and a CyclicBarrier, but I believe they only work within the scope of a single instance, not across multiple instances.
How should I proceed?
It is also very difficult to reproduce the scenario where one message is read by one instance and the other message by another instance at the same time, so that both instances end up trying to update the database simultaneously.
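Not part of the question above, but to illustrate why a JVM-local barrier cannot span instances, one hedged sketch of a cross-instance alternative is to track completion in the shared Oracle database and let an atomic update decide which instance performs the follow-up work (the table and column names here are hypothetical):

    import org.springframework.jdbc.core.JdbcTemplate;
    import org.springframework.stereotype.Component;

    @Component
    public class ResponseCorrelator {

        private final JdbcTemplate jdbc;

        public ResponseCorrelator(JdbcTemplate jdbc) {
            this.jdbc = jdbc;
        }

        public void onResponseA(String requestId) {
            jdbc.update("UPDATE REQUEST_TRACKER SET RESPONSE_A = 1 WHERE REQUEST_ID = ?", requestId);
            completeIfBothReceived(requestId);
        }

        public void onResponseB(String requestId) {
            jdbc.update("UPDATE REQUEST_TRACKER SET RESPONSE_B = 1 WHERE REQUEST_ID = ?", requestId);
            completeIfBothReceived(requestId);
        }

        private void completeIfBothReceived(String requestId) {
            // The WHERE clause makes the transition to COMPLETE atomic,
            // so only one instance ever sees claimed == 1.
            int claimed = jdbc.update(
                    "UPDATE REQUEST_TRACKER SET STATUS = 'COMPLETE' "
                    + "WHERE REQUEST_ID = ? AND RESPONSE_A = 1 AND RESPONSE_B = 1 AND STATUS = 'PENDING'",
                    requestId);
            if (claimed == 1) {
                // exactly one instance reaches this point; do the combined processing here
            }
        }
    }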
I have an embedded ActiveMQ instance in my Spring Boot app and I would like to consume a queue on that instance from another process or machine.
Is that possible?
Yes, that is definitely possible given the proper broker configuration. It just needs connectors that are bound to network interfaces reachable from remote clients.
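A minimal sketch of such a configuration, assuming the embedded broker is defined as a Spring bean (the bind address and port are placeholders):

    import org.apache.activemq.broker.BrokerService;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class EmbeddedBrokerConfig {

        // Embedded broker with a TCP transport connector so that
        // external processes and machines can connect to it.
        @Bean(initMethod = "start", destroyMethod = "stop")
        public BrokerService brokerService() throws Exception {
            BrokerService broker = new BrokerService();
            broker.setPersistent(false);
            // Bind to all interfaces; adjust the address and port to your environment.
            broker.addConnector("tcp://0.0.0.0:61616");
            return broker;
        }
    }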
I am writing a service with Spring and I am using Spring AMQP to connect to RabbitMQ.
I have two RabbitMQ clusters: one is used only for publishing messages (the messages are sent to the other cluster via the federation plugin), and the other cluster is for declaring the queues that end users will consume from.
The nodes sit behind an AWS load balancer; each cluster has its own LB.
I am using CachingConnectionFactory, RabbitTemplate and RabbitAdmin in my code, and I want to have connections to all the nodes so I can use them.
For the cluster that will contain the queues, I added queue-master-locator=random to the configuration so that new queues are declared across all the nodes in the cluster, even if my service does not have a connection to them.
With the cluster that publishes messages I have more of a problem, because I need a direct connection from my service to each of the nodes so that I can spread the load between them.
So my problem is: how do I create connections from my service to all the nodes in the cluster so that they are all used for declaring queues and sending messages?
And once there is some sort of solution to this, the next issue is what happens when a new node is added to the cluster. How can I create a connection to it and start using it as well?
I am using RabbitMQ 3.7.9, Spring 2.0.5 and Spring AMQP 2.0.5.
Thanks a lot!
There is currently no mechanism to do anything like that.
By default, Spring AMQP opens only one connection (optionally two, one for publishing, one for consuming).
Even when using CacheMode.CONNECTION, you'll get a new connection for each consumer (and connections will be created and cached on demand for producers), but you won't get any control over which node each connection goes to; that's a function of the LB.
The framework does provide the LocalizedQueueConnectionFactory, which will try to consume from the node that hosts a queue, but it won't work with a load balancer in place.
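For illustration, opting into connection-per-consumer caching is roughly this (a minimal sketch; the host name is a placeholder and, as noted above, the LB still decides which node each connection reaches):

    import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
    import org.springframework.amqp.rabbit.connection.CachingConnectionFactory.CacheMode;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class RabbitConfig {

        @Bean
        public CachingConnectionFactory connectionFactory() {
            // Point at the load balancer; each consumer gets its own cached connection.
            CachingConnectionFactory cf = new CachingConnectionFactory("rabbit-lb.example.com");
            cf.setCacheMode(CacheMode.CONNECTION);
            return cf;
        }
    }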
In general, however, such optimization is rarely needed.
Are you trying to solve an actual problem you are experiencing now, or something that you perceive that might be a problem?
It is generally best not to perform premature optimization.
Messages are coming in from Kafka, so I am planning to write a listener and, in onMessage, process each message and push it into Solr.
My question is more architectural: I have worked on web apps all my career, so in a big-data setting how do I deploy the Spring Kafka listener so that I can process thousands of messages a second?
How do I make my Spring code use multiple nodes to distribute the load?
I am planning to write a Spring Boot application to run in a Tomcat container.
If you use the same group id for all instances, different partitions will be assigned to different consumers (instances of your application).
So make sure you specify enough partitions on the topic you are going to consume.
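A minimal sketch of such a listener, assuming Spring for Apache Kafka (the topic and group names are placeholders):

    import org.springframework.kafka.annotation.KafkaListener;
    import org.springframework.stereotype.Component;

    @Component
    public class SolrIndexingListener {

        // Every instance uses the same group id, so Kafka assigns each
        // instance its own subset of the topic's partitions.
        @KafkaListener(topics = "incoming-events", groupId = "solr-indexer")
        public void onMessage(String payload) {
            // process the record and push it to Solr here
        }
    }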
I am new to Spring, so I am not sure whether what I intend to do is possible.
I need to create an asynchronous web service and a worker server (broker), both using the model & controller aspects of Spring.
The web service needs to forward its clients' requests to the broker via JMS and then immediately send a response back to the client indicating that the request has been queued.
The broker is intended to remain live, processing messages from multiple web service instances and sending the results back via an output JMS queue. The reason the broker needs to remain live is that processing each web service message involves calling other web services, some of which may be asynchronous and may take a long time to complete.
Additionally, I do not want to spawn multiple instances of the broker, as it is designed to handle multiple concurrent messages.
Is it possible to create both the web service and the broker within the same Spring project, with both running in a web container such as Tomcat, or do I need to code them as separate projects, with perhaps the broker as a traditional standalone server rather than a web-container servlet?
If so, could someone point me in the right direction for creating a stay-alive broker within Spring/Tomcat?
I understand the web service and JMS side of things, so I do not need any help with that.
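For what it's worth, the stay-alive worker side described above could be sketched as an ordinary Spring JMS listener that keeps consuming for as long as the application (in Tomcat or standalone) is running; the queue names and the processing call below are placeholders:

    import org.springframework.jms.annotation.JmsListener;
    import org.springframework.jms.core.JmsTemplate;
    import org.springframework.stereotype.Component;

    @Component
    public class BrokerWorker {

        private final JmsTemplate jmsTemplate;

        public BrokerWorker(JmsTemplate jmsTemplate) {
            this.jmsTemplate = jmsTemplate;
        }

        // The listener container keeps this consumer alive for the lifetime
        // of the application context and handles messages concurrently.
        @JmsListener(destination = "webservice.requests")
        public void onRequest(String request) {
            String result = process(request); // call the downstream web services here
            jmsTemplate.convertAndSend("webservice.results", result);
        }

        private String process(String request) {
            // placeholder for the long-running work
            return request;
        }
    }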