Using the RabbitMQ STOMP adapter to relay messages across subscriptions on different servers - spring

I am using Spring to set up STOMP server endpoints (extending AbstractWebSocketMessageBrokerConfigurer):
@Override
public void configureMessageBroker(MessageBrokerRegistry config) {
    config.enableStompBrokerRelay("/topic", "/queue")
          .setRelayHost(<rmqhost>);
}
@Override
public void registerStompEndpoints(StompEndpointRegistry registry) {
    registry.addEndpoint("/myapp/websockets").setAllowedOrigins("*");
}
The objective is that I can have multiple servers, and a client will connect to any one of them for a specific topic: /topic/topic-id-1
Any of the servers can, at any time, send a message for this topic using Spring's SimpMessagingTemplate:
messagingTemplate.convertAndSend(destination, message);
where destination = "/topic/topic-id-1".
For example: I have 2 server nodes and a client connected to each of them, both subscribing to the same topic (/topic/topic-id-1). The objective is that if server 1 sends a message for topic-id-1, it should be relayed via RabbitMQ to both clients subscribed to that topic. I see a queue being created with the routing key "topic-id-1", but only the client connected to the server that sends the message actually receives it. Am I missing something here? Isn't the RabbitMQ STOMP broker supposed to relay a message sent by one server for a subscription across all subscriptions to the same topic? Does a server need to do something else to get messages sent by the other node?

I met the same problem. After a whole day of exploring, I finally found the solution!! It's easy to configure, though:
registry.enableStompBrokerRelay("/topic/", "/queue/", "/exchange/")
        .setUserDestinationBroadcast("/topic/log-unresolved-user")
        .setUserRegistryBroadcast("/topic/log-user-registry")
The only thing you need to do is configure setUserDestinationBroadcast and setUserRegistryBroadcast when you enable the StompBrokerRelay, and it works!
I found the solution from here. Thanks to that guy!
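For reference, a complete configuration might look like the sketch below. The class name, relay host, and endpoint path are illustrative placeholders; it assumes the RabbitMQ STOMP plugin is enabled on the broker, and note that the two broadcast setters require a reasonably recent Spring version:

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.simp.config.MessageBrokerRegistry;
import org.springframework.web.socket.config.annotation.AbstractWebSocketMessageBrokerConfigurer;
import org.springframework.web.socket.config.annotation.EnableWebSocketMessageBroker;
import org.springframework.web.socket.config.annotation.StompEndpointRegistry;

@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig extends AbstractWebSocketMessageBrokerConfigurer {

    @Override
    public void configureMessageBroker(MessageBrokerRegistry registry) {
        registry.enableStompBrokerRelay("/topic", "/queue")
                .setRelayHost("rabbitmq.example.com")  // placeholder host
                // Broadcast destinations that let the server nodes share
                // user-destination and user-registry information via the broker:
                .setUserDestinationBroadcast("/topic/log-unresolved-user")
                .setUserRegistryBroadcast("/topic/log-user-registry");
    }

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        registry.addEndpoint("/myapp/websockets").setAllowedOrigins("*");
    }
}
```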

I'm not sure if this is exactly the same thing but I just solved a very similar problem. I posted my answer here: Sending STOMP messages from other layers of an application
I decided to split the implementation of the relay server into its own setup and then manually forward messages between the RabbitMQ server and the websocket subscribers on each of the servers.
Hopefully this can be of some use for you.


Spring integration - Sending message from TCP server with gateway

I am attempting to create a microservice which acts as both server and client for testing purposes.
My client side will connect in client mode to a remote microservice, which at the same time connects to my server side, so I can send messages as either client or server and get the replies.
Client side of μs1 <-> Server side of μs2 <-> Client side of μs2 <-> Server side of μs1
I have tried to make an incoming and an outgoing integration flow for each side (see the client one below) with TcpSendingMessageHandler and TcpReceivingChannelAdapter, but it is not possible to retrieve the reply sent by the counterpart: they are one-way components and don't wait for any reply to produce to the replyChannel header for my TcpClientGateway, so there is no response back.
@Bean
public IntegrationFlow incomingClient(final TcpReceivingChannelAdapter tcpReceivingChannelAdapter,
                                      final TcpServerEndpoint tcpServerEndpoint) {
    return IntegrationFlows
            .from(tcpReceivingChannelAdapter)
            .handle(message -> LOGGER.info("RECEIVING ON CLIENT: {}",
                    tcpServerEndpoint.processMessage((byte[]) message.getPayload())))
            .get();
}

@Bean
public IntegrationFlow outgoingClient(final MessageChannel outboundChannel,
                                      final TcpSendingMessageHandler tcpSendingClientMessageHandler) {
    return IntegrationFlows
            .from(outboundChannel)
            .handle(tcpSendingClientMessageHandler)
            .get();
}
As far as I know, I need to use the TcpInboundGateway and TcpOutboundGateway components, as they can manage the replies I need to get.
How could I implement this so that, once each side is connected to the other, I can send a message from my server side and get the reply? Is it possible to send a message as a server with an InboundGateway?
I need to send messages from any side of this flow no matter who starts the communication because it could be anyone.
Thanks.
You can't do that with gateways (unless you have two sets); for arbitrary peer to peer communication over a single connection, you have to use collaborating channel adapters. https://docs.spring.io/spring-integration/docs/current/reference/html/ip.html#ip-collaborating-adapters
When you receive a message, you will need to decide if it's a request or a reply. If it's a reply, you can send it to an aggregator, where you previously sent a copy of the request.
You will need some mechanism to correlate replies to requests; since TCP has no concept of a header, it would have to be something in the data.
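One way to sketch that correlation idea in plain Java (the frame format, class, and method names here are illustrative assumptions, not Spring Integration API): embed a generated id in each outbound request, and when a frame arrives, treat it as a reply if the id matches a pending request, otherwise as a new request from the peer.

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

// Correlates replies to requests over a single duplex connection by
// embedding an id in the payload (TCP itself has no header concept).
public class TcpCorrelator {

    // Requests we sent that are still awaiting a reply, keyed by id.
    private final Map<String, CompletableFuture<String>> pending = new ConcurrentHashMap<>();

    // Illustrative frame format: "<id>|<body>".
    public String frameRequest(String body) {
        String id = UUID.randomUUID().toString();
        pending.put(id, new CompletableFuture<>());
        return id + "|" + body;
    }

    // Future completed when the matching reply arrives.
    public CompletableFuture<String> replyFor(String id) {
        return pending.get(id);
    }

    // Called for every inbound frame; returns true if it was a reply
    // to one of our requests, false if it was a request from the peer.
    public boolean onInbound(String frame) {
        int sep = frame.indexOf('|');
        if (sep < 0) {                  // malformed frame: treat as a bare request
            handleRequest("", frame);
            return false;
        }
        String id = frame.substring(0, sep);
        String body = frame.substring(sep + 1);
        CompletableFuture<String> f = pending.remove(id);
        if (f != null) {
            f.complete(body);           // a reply we were waiting for
            return true;
        }
        handleRequest(id, body);        // a new request from the peer
        return false;
    }

    protected void handleRequest(String id, String body) {
        // Application logic; a real implementation would send "<id>|<response>" back.
    }
}
```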
There's a partial solution in the samples repo https://github.com/spring-projects/spring-integration-samples/tree/main/intermediate/tcp-client-server-multiplex - it is old and uses XML configuration, but the concepts are the same.

ActiveMQ failover reconnect message rolled back

I'm trying to find a solution that guarantees a message is only ever processed completely by a single consumer.
There are lots of messages on a queue, and a number of consumers read and process them, writing out to a database. My messages are transacted, so that if a consumer dies the message goes back onto the queue for another consumer to process.
We have to have an active/passive configuration for ActiveMQ, and this is causing the issue. If I stop the active ActiveMQ, the consumer reconnects to the other ActiveMQ, as I am using the failover transport. This is fine, but during the reconnect the message is put back on the queue, and the consumer is not made aware of the reconnection and continues to process. This leads to the situation where two consumers process the same message.
I would have liked to use a distributed transaction manager and this may happen in the future but for now I need a different solution.
If I don't use failover transport then I can hook into a JMSException listener and abort the consumer. Unfortunately this does not work when using failover transport.
I would like either to use the failover transport for the initial connect (to discover which of the ActiveMQs is running) and then force failover not to reconnect... or use a different transport that accepts a list of servers to try but doesn't reconnect... or find a way to listen for the reconnect.
Note that this happens sometimes with just one server using failover (reconnect).
I could write my own initial connect logic (hunting for the active server) but wanted to check if there is another option.
You can listen to transport events on the ActiveMQConnection by using a listener:
connection = (ActiveMQConnection) factory.createConnection();
connection.addTransportListener(new TransportListener() {
    public void onCommand(Object command) {
        // Do something
    }

    public void onException(IOException error) {
        // Do something
    }

    public void transportInterupted() {
        // Do something
    }

    public void transportResumed() {
        // Do something
    }
});
connection.start();
Note that in this example the listener is set on the Connection directly; however, you can set an instance on the ActiveMQConnectionFactory which will be assigned to each Connection instance that it creates.
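One hedged way to apply the listener to the duplicate-processing problem above (sketch only: consumer, session, and process are assumed to exist, and the session is assumed to be transacted) is to record the interruption in a flag and roll back instead of committing when a failover happened mid-message. This narrows, but without XA does not fully close, the duplicate window:

```java
// Sketch: if the transport was interrupted while we were processing,
// the broker has already requeued the message, so roll back our JMS
// work instead of committing it a second time.
final AtomicBoolean interrupted = new AtomicBoolean(false);

connection.addTransportListener(new TransportListener() {
    public void onCommand(Object command) { }
    public void onException(IOException error) { }
    public void transportInterupted() { interrupted.set(true); }
    public void transportResumed() { }
});

// In the consumer loop (consumer, session, process(...) assumed):
Message msg = consumer.receive();
process(msg);
if (interrupted.getAndSet(false)) {
    session.rollback();  // another consumer will (or already did) get the message
} else {
    session.commit();
}
```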

Uncommon TCP flow with Spring Integration

I need a suggestion on how to implement, if it is possible, the following TCP flow with Spring Integration:
Only the server side is needed.
The TCP server waits for an incoming connection.
On connection of a client, the server sends data to the client.
The client replies with a response.
The server may reply immediately with new data, or wait for external application events to send new packets to the client.
In groovy the code could be demonstrated as follow:
def serverSocket = new ServerSocket(...)
def connSocket = serverSocket.accept()
connSocket.outputStream.write(...)
while (true) {
    def readBuffer = new byte[256]
    connSocket.inputStream.read(readBuffer)
    if (needToSendBack(readBuffer)) {
        connSocket.outputStream.write(...)
    }
}

def sendByDemand(def data) {
    connSocket.outputStream.write(data)
}
The method sendByDemand could be invoked from a separate thread.
Here is a list of problems which I marked for myself, which prevents me to implement it with the Spring Integration (2.x version):
As far as I understand, the standard "Service Activator" approach cannot work in this scenario, since it is driven by "connection events". So when the application decides to send new data to the client, it cannot use the Service Activator.
I have no "on TCP connection" event. I found that version 3.0 comes with event support in this area, but since I cannot upgrade to 3.0, I implemented the connection check with interceptors on the connection factory. However, once I know the client is connected, trying to use Direct Channels to send a message fails with a "no subscribers" error.
If someone could post possible Spring configuration for this scenario or point to the similar flow example it may be very helpful.
Your use case is possible, but it would make your life easier if you could upgrade to 3.0.
'Dispatcher has no subscribers' means there is no consumer subscribed to that channel.
You need to show your configuration; you must use collaborating channel adapters for this (not a gateway).
You need to capture the connectionId of the connection when it is established, and use it to populate the ip_connectionId header so the outbound channel adapter knows which socket to write the message to.
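As a sketch of that last step (channel and variable names are illustrative): once the connection id has been captured, set it as the IpHeaders.CONNECTION_ID header on each outbound message so the collaborating outbound channel adapter routes it to the right socket.

```java
// Sketch: 'connectionId' was captured when the client connected
// (e.g. via an interceptor on the connection factory in SI 2.x).
Message<String> out = MessageBuilder.withPayload("data for the client")
        .setHeader(IpHeaders.CONNECTION_ID, connectionId)
        .build();
toClientChannel.send(out);  // channel feeding the TcpSendingMessageHandler
```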

Spring 4 websocket not closing on application shutdown

So this is a strange one. I have a basic Spring 4 websockets application running on Glassfish 4 using RabbitMQ for the message broker, nothing fancy. I was testing the durability of the websocket clients (one in Java and one in JavaScript using stomp.js and SockJS) and noticed that when I undeployed the application from Glassfish, both clients would think the websocket was still up. For fun I added a recurring ping request from each client to the server to mimic a heartbeat. When the application is up, the ping request works great and I get pong responses from the server; but when I undeploy the app from Glassfish (to simulate a disconnect) I still get successful ping and pong messages from the server. It seems to me that when the application is undeployed it should send out disconnect messages to all connected clients, which would invoke their reconnect logic to hit another server in the cluster. Has anyone seen similar behavior? Thanks for any help!
I think I have this one figured out. I had failed to set the heartbeat configuration on the STOMP connection. Once I set these values I began seeing heartbeats sent to the client by the server, and when I pulled the plug on the websocket application the heartbeats stopped, as they should. After that point it was very easy to implement some reconnect logic based on whether the last received heartbeat was too old. Here is some sample code for configuring the STOMP client; most of this I pulled from Spring's stock-portfolio STOMP client example.
Inside the StompWebSocketHandler class you simply add this block of code. You would obviously set the heartbeatInterval variable to whatever value you desire.
public void afterConnectionEstablished(WebSocketSession session) throws IOException {
    StompHeaderAccessor headers = StompHeaderAccessor.create(StompCommand.CONNECT);
    headers.setAcceptVersion("1.1,1.2");
    headers.setHeartbeat(heartbeatInterval, heartbeatInterval);
    Message<byte[]> message = MessageBuilder.withPayload(new byte[0]).setHeaders(headers).build();
    TextMessage textMessage = new TextMessage(new String(encoder.encode(message), DEFAULT_CHARSET));
    session.sendMessage(textMessage);
}

RabbitMQ with Websocket and Gevent

I'm looking to develop a realtime API for my web application using Websocket. For this I'm using RabbitMQ as the broker, and my backend is based on Python (gevent + websocket), with Pika/Puka as the RabbitMQ client.
The problem I'm facing is how to connect the websocket with RabbitMQ. After the initial websocket connection is established, the socket object waits for new messages from the client; in the case of RabbitMQ, we need to set up a consumer for it, so it will process a message when it receives one. We can look at it this way:
Clients establish a connection with the server via a full-duplex websocket.
All clients should act as RabbitMQ consumers after the initial websocket handshake, so they all get updates when any client publishes a message.
When a new message arrives at the websocket, that client sends it to RabbitMQ, so at that moment this client acts as a publisher.
The problem is that the websocket waits for new messages, and the RabbitMQ consumer waits for new messages on its channel; I have failed to link these two loops.
I'm not sure whether this is a wrong method ...
I'm unable to find a method to implement this scenario. If I'm going the wrong way, or there is an alternate method, please help me fix this.
Thank you,
Haridas N.
I implemented similar requirement with Tornado + websocket + RabbitMQ + Pika.
I think this is an already known method. Here is my git repo for this web chat application:
https://github.com/haridas/RabbitChat
It seems very difficult to do a similar thing with gevent/twisted because the RabbitMQ clients don't support the event loops of gevent/twisted.
Pika has a Tornado adapter, which makes this easy to set up. The Pika development team is working on a Twisted adapter as well; I hope they will release it very soon.
Thanks,
Haridas N.
http://haridas.in.
A simple solution would be to use gevent.queue.Queue instances for inter-greenlet communication.
