In our current application, we use Spring WebSockets over STOMP. We are looking to scale horizontally. Are there any best practices on how we should handle WebSocket traffic over multiple Tomcat instances, and how can we maintain session info across multiple nodes? Is there a working sample that one can refer to?
Horizontally scaling WebSockets is actually very different from horizontally scaling stateless or stateful HTTP-only applications.
Horizontally Scaling Stateless HTTP app: just spin up some application instances on different machines and put a load balancer in front of them. There are quite a lot of different load balancer solutions, such as HAProxy, Nginx, etc. If you are in a cloud environment such as AWS, you can also use a managed solution such as Elastic Load Balancer.
Horizontally Scaling Stateful HTTP app: it would be great if all applications could be stateless every time, but unfortunately that's not always possible. So, when dealing with stateful HTTP apps, you must take care of the HTTP session, which is basically a local storage for each different client where the web server can keep data across different HTTP requests (such as when dealing with a shopping cart). When scaling horizontally you should be aware that, as I said, it's a LOCAL storage, so ServerA will not be able to handle an HTTP session that lives on ServerB. In other words, if for any reason Client1, who is being served by ServerA, suddenly starts to be served by ServerB, his HTTP session will be lost (and his shopping cart will be gone!). The reasons could be a node failure or even a deployment.
In order to address this issue, you can't keep HTTP sessions only locally; that is, you must store them in an external component. There are several components that would be able to handle this, such as any relational database, but that would actually add overhead. Some NoSQL databases handle this key-value behavior very well, such as Redis.
Now, with the HTTP session stored in Redis, if a client starts to be served by another server, that server will fetch the client's HTTP session from Redis and load it into its memory, so everything will continue working and the user will no longer lose his HTTP session.
You can use Spring Session to easily store the HTTP session in Redis.
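A minimal sketch of what that configuration might look like (assuming the spring-session-data-redis dependency is on the classpath; the connection factory bean is illustrative, and Spring Boot can often auto-configure it for you):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;
import org.springframework.session.data.redis.config.annotation.web.http.EnableRedisHttpSession;

// Sketch: @EnableRedisHttpSession swaps the container's HTTP session for one
// stored in Redis, so any node can serve any client.
@Configuration
@EnableRedisHttpSession
public class HttpSessionConfig {

    // Illustrative connection factory; by default it points at localhost:6379
    @Bean
    public LettuceConnectionFactory connectionFactory() {
        return new LettuceConnectionFactory();
    }
}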
Horizontally Scaling WebSocket app: when a WebSocket connection is established, the server must keep the connection open with the client so that they can exchange data in both directions. When a client is listening to a destination such as "/topic/public.messages", we say the client is subscribed to that destination. In Spring, when you use the simpleBroker approach, the subscriptions are kept in memory. So what happens, for instance, if Client1 being served by ServerA wants to send a message over WebSocket to Client2 being served by ServerB? You already know the answer! The message will not be delivered to Client2, because ServerA does not even know about Client2's subscription.
So, in order to address this issue, again you have to externalize the WebSocket subscriptions. As you are using STOMP as a subprotocol, you need an external component that can act as an external STOMP broker. There are quite a lot of tools able to do this, but I would suggest RabbitMQ.
Now, you must change your Spring configuration so that it no longer keeps the subscriptions in memory. Instead, it will delegate the subscriptions to an external STOMP broker. You can easily achieve this with a few basic settings, such as enableStompBrokerRelay.
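A minimal sketch of that relay configuration (the host, port, and endpoint are illustrative; 61613 is the default port of RabbitMQ's STOMP plugin):

import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.simp.config.MessageBrokerRegistry;
import org.springframework.web.socket.config.annotation.EnableWebSocketMessageBroker;
import org.springframework.web.socket.config.annotation.StompEndpointRegistry;
import org.springframework.web.socket.config.annotation.WebSocketMessageBrokerConfigurer;

@Configuration
@EnableWebSocketMessageBroker
public class BrokerRelayConfig implements WebSocketMessageBrokerConfigurer {

    @Override
    public void configureMessageBroker(MessageBrokerRegistry registry) {
        // Delegate subscriptions to the external STOMP broker
        // instead of the in-memory simple broker
        registry.enableStompBrokerRelay("/topic", "/queue")
                .setRelayHost("rabbitmq.example.com") // illustrative host
                .setRelayPort(61613);                 // RabbitMQ STOMP plugin default
        registry.setApplicationDestinationPrefixes("/app");
    }

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        registry.addEndpoint("/ws"); // illustrative endpoint
    }
}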
The important thing to note is that the HTTP session is different from the WebSocket session. Using Spring Session to store the HTTP session in Redis has absolutely nothing to do with horizontally scaling WebSockets.
I've coded a complete web chat application with Spring Boot (and much more) that uses RabbitMQ as a full external STOMP broker, and it's public on GitHub, so please clone it, run the app on your machine and look at the code details.
When it comes to a WebSocket connection loss, there's not much that Spring can do. The reconnection must be requested by the client side, for instance by implementing a reconnection callback function (that's the WebSocket handshake flow: the client must start the handshake, not the server). There are some client-side libraries that can handle this transparently for you; that's not the case with SockJS. In the chat application I also implemented this reconnection feature.
Your requirement can be divided into 2 sub-tasks:
Maintain session info across multiple nodes: You can try Spring Session clustering backed by Redis (see: HttpSession with Redis). This is very simple and already has support for Spring WebSockets (see: Spring Session & WebSockets).
Handle WebSocket traffic over multiple Tomcat instances: There are several ways to do that.
The first way: use a full-featured broker (e.g. ActiveMQ) and try the new feature Support multiple WebSocket servers (available from 4.2.0 RC1).
The second way: use a full-featured broker and implement a distributed UserSessionRegistry (e.g. using Redis). The default implementation, DefaultUserSessionRegistry, uses in-memory storage.
Updated: I've written a simple implementation using Redis; try it if you are interested.
To configure a full-featured broker (broker relay), you can try:
public class WebSocketConfig extends AbstractWebSocketMessageBrokerConfigurer {

    ...

    @Autowired
    private RedisConnectionFactory redisConnectionFactory;

    @Override
    public void configureMessageBroker(MessageBrokerRegistry config) {
        config.enableStompBrokerRelay("/topic", "/queue")
                .setRelayHost("localhost") // broker host
                .setRelayPort(61613);      // broker port
        config.setApplicationDestinationPrefixes("/app");
    }

    @Bean
    public UserSessionRegistry userSessionRegistry() {
        return new RedisUserSessionRegistry(redisConnectionFactory);
    }

    ...
}
and
import java.util.Collections;
import java.util.Set;

import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.core.BoundSetOperations;
import org.springframework.data.redis.core.RedisOperations;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.data.redis.serializer.StringRedisSerializer;
import org.springframework.messaging.simp.user.UserSessionRegistry;
import org.springframework.util.Assert;

/**
 * An implementation of {@link UserSessionRegistry} backed by Redis.
 *
 * @author thanh
 */
public class RedisUserSessionRegistry implements UserSessionRegistry {

    /**
     * The prefix for each key of the Redis Set representing a user's sessions. The suffix is the unique user id.
     */
    static final String BOUNDED_HASH_KEY_PREFIX = "spring:websockets:users:";

    private final RedisOperations<String, String> sessionRedisOperations;

    @SuppressWarnings("unchecked")
    public RedisUserSessionRegistry(RedisConnectionFactory redisConnectionFactory) {
        this(createDefaultTemplate(redisConnectionFactory));
    }

    public RedisUserSessionRegistry(RedisOperations<String, String> sessionRedisOperations) {
        Assert.notNull(sessionRedisOperations, "sessionRedisOperations cannot be null");
        this.sessionRedisOperations = sessionRedisOperations;
    }

    @Override
    public Set<String> getSessionIds(String user) {
        Set<String> entries = getSessionBoundSetOperations(user).members();
        return (entries != null) ? entries : Collections.<String>emptySet();
    }

    @Override
    public void registerSessionId(String user, String sessionId) {
        getSessionBoundSetOperations(user).add(sessionId);
    }

    @Override
    public void unregisterSessionId(String user, String sessionId) {
        getSessionBoundSetOperations(user).remove(sessionId);
    }

    /**
     * Gets the {@link BoundSetOperations} to operate on a username.
     */
    private BoundSetOperations<String, String> getSessionBoundSetOperations(String username) {
        String key = getKey(username);
        return this.sessionRedisOperations.boundSetOps(key);
    }

    /**
     * Gets the Redis key for this user by prefixing the username appropriately.
     */
    static String getKey(String username) {
        return BOUNDED_HASH_KEY_PREFIX + username;
    }

    @SuppressWarnings("rawtypes")
    private static RedisTemplate createDefaultTemplate(RedisConnectionFactory connectionFactory) {
        Assert.notNull(connectionFactory, "connectionFactory cannot be null");
        StringRedisTemplate template = new StringRedisTemplate(connectionFactory);
        template.setKeySerializer(new StringRedisSerializer());
        template.setValueSerializer(new StringRedisSerializer());
        template.afterPropertiesSet();
        return template;
    }
}
Maintain session info across multiple nodes:
Suppose we have 2 server hosts behind a load balancer.
WebSockets are socket connections from the browser to a specific server host, e.g. host1.
Now if host1 goes down, the socket connection from the load balancer to host1 will break.
How will Spring reopen the same WebSocket connection from the load balancer to host2? The browser should not have to open a new WebSocket connection.
Every piece of session data passed into the socket is broadcast to all users, since every session subscribes to the same UnicastProcessor (eventPublisher).
How can I send event data to a single session id and not to all of them?
@Override
public Mono<Void> handle(WebSocketSession session) {
    WebSocketMessageSubscriber subscriber = new WebSocketMessageSubscriber(eventPublisher);
    session.receive()
            .map(WebSocketMessage::getPayloadAsText)
            .map(this::toEvent)
            .subscribe(subscriber::onNext, subscriber::onError, subscriber::onComplete);
    return session.send(outputEvents.map(session::textMessage));
}
My use case requires both options: broadcasting any state changed by any client to all connected sockets, plus the ability to send a response to the specific client (sessionId) that sent a request within a specific event.
GitHub link
Or should it be routed to 2 different handlers from the same WebSocket path?
Note that from JavaScript, new WebSocket(url/path) creates a socket connection; there is no way to change the path without instantiating a new WebSocket object, which is not wanted.
I'm not interested in creating 2 sockets for every browser client, so my goal is to base the server connection on a single WebSocket path.
@Bean
public HandlerMapping webSocketMapping(UnicastProcessor<Event> eventPublisher, Flux<Event> events) {
    Map<String, Object> map = new HashMap<>();
    map.put("/websocket/chat", new ChatSocketHandler(eventPublisher, events));

    SimpleUrlHandlerMapping simpleUrlHandlerMapping = new SimpleUrlHandlerMapping();
    simpleUrlHandlerMapping.setUrlMap(map);
    // Without the order things break :-/
    simpleUrlHandlerMapping.setOrder(10);
    return simpleUrlHandlerMapping;
}
If so, I would be glad to see an example of such a solution.
With servlet-based WebSockets this is possible, because you can connect the WebSocket to a messaging broker, and the broker will then take care of sending messages to a specific client.
But with the WebFlux-based WebSockets that Spring provides, I couldn't manage to bring messaging brokers into action. It seems there is no support for them yet in Spring WebFlux.
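One workaround (a sketch only, not a built-in Spring API; Event is the question's own event type) is to keep one processor per session in a registry, so an event can be routed to a single sessionId or broadcast to all sessions:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import reactor.core.publisher.UnicastProcessor;

// Hypothetical registry: one UnicastProcessor per WebSocket session instead of
// a single shared eventPublisher, so events can target one session or all.
public class SessionEventRegistry {

    private final Map<String, UnicastProcessor<Event>> sessions = new ConcurrentHashMap<>();

    public UnicastProcessor<Event> register(String sessionId) {
        UnicastProcessor<Event> processor = UnicastProcessor.create();
        sessions.put(sessionId, processor);
        return processor;
    }

    public void unregister(String sessionId) {
        sessions.remove(sessionId);
    }

    // Send an event to a single session only
    public void sendTo(String sessionId, Event event) {
        UnicastProcessor<Event> processor = sessions.get(sessionId);
        if (processor != null) {
            processor.onNext(event);
        }
    }

    // Broadcast an event to every connected session
    public void broadcast(Event event) {
        sessions.values().forEach(processor -> processor.onNext(event));
    }
}

In handle(WebSocketSession session) you would then register a processor under session.getId(), build the outbound Flux from that processor instead of the shared outputEvents, and unregister it when the session completes.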
Find a sample with servlet stack here:
https://github.com/bmd007/RealtimeNoteSharing.git
I have a Spring Boot app (JHipster) that uses STOMP over WebSockets to communicate information from the server to users.
I recently added an ActiveMQ server to handle scaling the app horizontally, with an Amazon auto-scaling group / load-balancer.
I make use of the convertAndSendToUser() method, which works on single instances of the app to locate the authenticated user's "individual queue" so only they receive the message.
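(For context, a call of that kind looks something like this; the template variable, destination, and payload are illustrative:)

// Illustrative: SimpMessagingTemplate routes the payload to the authenticated
// user's individual queue, e.g. /user/{username}/queue/notifications
simpMessagingTemplate.convertAndSendToUser(username, "/queue/notifications", payload);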
However, when I launch the app behind the load balancer, I am finding that messages are only sent to the user if the event is generated on the server that their websocket-proxy connection (to the broker) is established on.
How do I ensure the message goes through ActiveMQ to whichever instance of the app the user is actually "connected to", regardless of which instance receives, say, an HTTP request that executes the convertAndSendToUser() call?
For reference, here is my StompBrokerRelayMessageHandler:
@Bean
public AbstractBrokerMessageHandler stompBrokerRelayMessageHandler() {
    StompBrokerRelayMessageHandler handler =
            (StompBrokerRelayMessageHandler) super.stompBrokerRelayMessageHandler();
    handler.setTcpClient(new Reactor2TcpClient<>(
            new StompTcpFactory(
                    orgProperties.getAws().getAmazonMq().getStompRelayHost(),
                    orgProperties.getAws().getAmazonMq().getStompRelayPort(),
                    orgProperties.getAws().getAmazonMq().getSsl())
    ));
    return handler;
}
@Override
public void configureMessageBroker(MessageBrokerRegistry config) {
    config.enableStompBrokerRelay("/queue", "/topic")
            .setSystemLogin(orgProperties.getAws().getAmazonMq().getStompRelayHostUser())
            .setSystemPasscode(orgProperties.getAws().getAmazonMq().getStompRelayHostPass())
            .setClientLogin(orgProperties.getAws().getAmazonMq().getStompRelayHostUser())
            .setClientPasscode(orgProperties.getAws().getAmazonMq().getStompRelayHostPass());
    config.setApplicationDestinationPrefixes("/app");
}
I have found the name of the queue that is generated on ActiveMQ by examining the headers of the SessionSubscribeEvent that is fired in the listener when a user subscribes to a user queue; the relevant header is simpSessionId.
@Override
@EventListener({SessionSubscribeEvent.class})
public void onSessionSubscribeEvent(SessionSubscribeEvent event) {
    log.debug("Session Subscribe Event: {}", event.getMessage().getHeaders());
}
The corresponding queues can be found in ActiveMQ, in the format {simpDestination}-user{simpSessionId}.
Could I save the sessionId in a key-value pair and just push messages onto that topic channel?
I also found some possibilities for setting ActiveMQ-specific STOMP properties in the CONNECT/SUBSCRIBE frame to create durable subscribers (client-id and subscriptionName). If I set these properties, will Spring then understand the routing?
Modifying the MessageBrokerRegistry config resolved the issue:
config.enableStompBrokerRelay("/queue", "/topic")
        .setUserDestinationBroadcast("/topic/registry.broadcast");
Based on this paragraph in the documentation, section 4.4.13:
In a multi-application server scenario a user destination may remain unresolved because the user is connected to a different server. In such cases you can configure a destination to broadcast unresolved messages to so that other servers have a chance to try. This can be done through the userDestinationBroadcast property of the MessageBrokerRegistry in Java config and the user-destination-broadcast attribute of the message-broker element in XML.
I did not see any documentation on why /topic/registry.broadcast was the correct "topic" destination, but I found various iterations of it in these threads:
websocket sessions sample doesn't cluster.. spring-session-1.2.2
What is MultiServerUserRegistry in spring websocket?
Spring websocket - sendToUser from a cluster does not work from backup server
We have a Spring application where a Redis cache has been implemented alongside a MySQL database. We use the Redis cache to store temporary values for server-side validations instead of hitting the database every time, since hitting the database on every call reduces system performance.
Now to explain my problem: while hitting the Spring Boot actuator endpoints, if my Redis cache server suddenly stops, we would like to know how to get a notification that the Redis cache server is down. So we need a solution / example Java application that gets such a notification, using a Redis cache listener context or anything like that.
Redis doesn't work that way. In fact, no remote service will notify your application that it's down. Usually, it's the other way round: if the service you're consuming is accessed through a more or less sophisticated client, you might take advantage of the client's features.
Asynchronous clients that run I/O or monitoring threads can help here. More specifically, it depends on the client you're using with Spring Boot and Redis. Jedis is a plain client that only reacts on a per-request basis. Lettuce allows you to register a RedisConnectionStateListener that is called on specific connection events, such as connected/disconnected:
RedisClient redisClient = …;

redisClient.addListener(new RedisConnectionStateListener() {

    @Override
    public void onRedisConnected(RedisChannelHandler<?, ?> redisChannelHandler) {
        // connection is up
    }

    @Override
    public void onRedisDisconnected(RedisChannelHandler<?, ?> redisChannelHandler) {
        // connection was lost
    }

    @Override
    public void onRedisExceptionCaught(RedisChannelHandler<?, ?> redisChannelHandler, Throwable throwable) {
        // an exception occurred on the connection
    }
});
When using Spring Data Redis, retrieving the RedisClient from LettuceConnectionFactory can be a bit tricky, as it is held in a private field; hence it requires reflection.
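A sketch of that reflective lookup (a hypothetical helper; the private field name "client" is an assumption about LettuceConnectionFactory internals and may differ between versions):

import java.lang.reflect.Field;

import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;
import org.springframework.util.ReflectionUtils;

import io.lettuce.core.RedisClient; // com.lambdaworks.redis.RedisClient on Lettuce 4.x

public final class LettuceClientAccessor {

    private LettuceClientAccessor() {
    }

    // Hypothetical helper: the field name "client" is an assumption and may
    // change across Spring Data Redis versions.
    public static RedisClient extractClient(LettuceConnectionFactory factory) {
        Field field = ReflectionUtils.findField(LettuceConnectionFactory.class, "client");
        ReflectionUtils.makeAccessible(field);
        return (RedisClient) ReflectionUtils.getField(field, factory);
    }
}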
We have a Spring-over-WebSockets connection to which we're passing a CONNECT frame:
CONNECT\naccept-version:1.2\nheart-beat:10000,10000\n\n\u0000
The handler acknowledges it, starts a new session, and then returns:
CONNECTED
version:1.2
heart-beat:0,0
However, we want the heart-beats so we can keep the WebSocket open. We're not using SockJS.
I stepped through the Spring Message Handler:
StompHeaderAccessor [headers={simpMessageType=CONNECT, stompCommand=CONNECT, nativeHeaders={accept-version=[1.2], heart-beat=[5000,0]}, simpSessionAttributes={}, simpHeartbeat=[J@5eba717, simpSessionId=46e855c9}]
After it gets the heart-beat native header, it sets what looks like a memory address: simpHeartbeat=[J@5eba717, simpSessionId=46e855c9}]
Of note, after the broker authenticates:
Processing CONNECT session=46e855c9 (is the sessionId here different from the simpSessionId?)
When running TRACE-level debugging earlier, I saw a notice "Scheduling heartbeat..." or something to that effect, though I'm not seeing it now.
Any idea what's going on?
Thanks
I have found the explanation in the documentation:
SockJS Task Scheduler — stats from the thread pool of the SockJS task scheduler, which is used to send heartbeats. Note that when heartbeats are negotiated on the STOMP level, the SockJS heartbeats are disabled.
Are SockJS heartbeats different from STOMP heart-beats?
Starting with Spring 4.2, you can have full control, from the server side, over the heartbeat negotiation outcome when using STOMP over SockJS with the built-in SimpleBroker:
public class WebSocketConfigurer extends AbstractWebSocketMessageBrokerConfigurer {

    @Override
    public void configureMessageBroker(MessageBrokerRegistry config) {
        ThreadPoolTaskScheduler te = new ThreadPoolTaskScheduler();
        te.setPoolSize(1);
        te.setThreadNamePrefix("wss-heartbeat-thread-");
        te.initialize();

        config.enableSimpleBroker("/")
                /**
                 * Configure the value for the heartbeat settings. The first number
                 * represents how often the server will write or send a heartbeat.
                 * The second is how often the client should write. 0 means no heartbeats.
                 * <p>By default this is set to "0, 0" unless the {@link #setTaskScheduler
                 * taskScheduler} is configured, in which case the default becomes "10000,10000"
                 * (in milliseconds).
                 * @since 4.2
                 */
                .setHeartbeatValue(new long[]{heartbeatServer, heartbeatClient})
                .setTaskScheduler(te);
    }

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        registry.addEndpoint(.....)
                .setAllowedOrigins(....)
                .withSockJS();
    }
}
Yes, SockJS heartbeats are different. They are fundamentally the same mechanism, but their purpose in the SockJS protocol is to ensure that the connection doesn't look "dead", in which case proxies could close it proactively. More generally, a heartbeat allows each side to detect connectivity issues proactively and clean up resources.
When using STOMP over SockJS there is no need to have both, which is why the SockJS heartbeats are turned off if STOMP heartbeats are in use. However, you're not using SockJS here.
You're not showing any configuration, but my guess is that you're using the built-in simple broker, which does not automatically send heartbeats. When configuring it, you will see an option to enable heartbeats, and you also need to set a task scheduler.
@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        // ...
    }

    @Override
    public void configureMessageBroker(MessageBrokerRegistry registry) {
        registry.enableSimpleBroker(...)
                .setTaskScheduler(...)
                .setHeartbeatValue(...);
    }
}
We had the same problem with Spring, WebSockets, STOMP and Spring Session: no heartbeats, and the Spring session could expire while the WebSocket received no messages on the server side. We ended up enabling STOMP heartbeats from the browser every 20000 ms and adding SimpMessageType.HEARTBEAT to the Spring sessionRepositoryInterceptor matches, to keep the Spring session's last-access time updated on STOMP heartbeats even when no messages arrive. We had to use AbstractSessionWebSocketMessageBrokerConfigurer as a base to enable the built-in binding between the Spring session and the WebSocket session (see the second example in the Spring manual). In the official example, the Spring session is updated on inbound WebSocket CONNECT/MESSAGE/SUBSCRIBE/UNSUBSCRIBE messages, but not on heartbeats, which is why we had to reconfigure two things: enable at least inbound heartbeats, and adjust the Spring session to react to WebSocket heartbeats:
public class WebSocketConfig extends AbstractSessionWebSocketMessageBrokerConfigurer<ExpiringSession> {

    @Autowired
    SessionRepositoryMessageInterceptor sessionRepositoryInterceptor;

    @Override
    public void configureMessageBroker(MessageBrokerRegistry config) {
        sessionRepositoryInterceptor.setMatchingMessageTypes(EnumSet.of(
                SimpMessageType.CONNECT, SimpMessageType.MESSAGE, SimpMessageType.SUBSCRIBE,
                SimpMessageType.UNSUBSCRIBE, SimpMessageType.HEARTBEAT));
        config.setApplicationDestinationPrefixes(...);
        config.enableSimpleBroker(...)
                .setTaskScheduler(new DefaultManagedTaskScheduler())
                .setHeartbeatValue(new long[]{0, 20000});
    }
}
Another way we tried was to partially re-implement the SessionRepositoryMessageInterceptor functionality to update the Spring session's last-access time on outbound WebSocket messages, plus maintain a WebSocket-session-to-Spring-session map via listeners, but the code above did the trick.
I have a (legacy) TCP service that has multiple processes. Each process runs on the same host, but on a different port. The service is single-threaded, so the way to increase throughput is to round-robin each request across each of the ports.
I am providing an AMQP exposure to this legacy application. It's very simple: take a string off the AMQP queue, pass it to the application, and return the response string to the AMQP reply queue.
This works great on a single port. However, I'd like to fan out the requests across all the ports.
Spring Integration seems to only provide AbstractClientConnectionFactory implementations that either connect directly to a single host/port (TcpNetClientConnectionFactory) or maintain a pool of connections to a single host/port (CachingClientConnectionFactory). There aren't any that pool connections between a single host and multiple ports.
I have attempted to write my own AbstractClientConnectionFactory that maintains a pool of AbstractClientConnectionFactory objects and round-robins between them. However, I have struck several issues to do with handling the TCP connections when the target service goes away or the network is interrupted, which I have not been able to solve.
There is also the approach taken by this question: Spring Integration 4 - configuring a LoadBalancingStrategy in Java DSL but the solution to that was to hardcode the number of endpoints. In my case, the number of endpoints is only known at runtime and is a user-configurable setting.
So, basically I need to create a TcpOutboundGateway per port dynamically at runtime and somehow register it in my IntegrationFlow. I have attempted the following:
@Bean
public IntegrationFlow xmlQueryWorkerIntegrationFlow() {
    SimpleMessageListenerContainer inboundQueue = getMessageListenerContainer();
    DirectChannel rabbitReplyChannel = MessageChannels.direct().get();

    IntegrationFlowBuilder builder = IntegrationFlows
            .from(Amqp.inboundGateway(inboundQueue)
                    .replyChannel(rabbitReplyChannel))
            /* SOMEHOW DO THE ROUND ROBIN HERE */
            // I have tried:
            .channel(handlerChannel()) // doesn't work, the gateways don't get started and the message doesn't get sent to the gateway
            // and I have also tried:
            .handle(gateway1)
            .handle(gateway2) // doesn't work, it chains the handlers instead of round-robining between them
            //
            .transform(new ObjectToStringTransformer())
            .channel(rabbitReplyChannel);
    return builder.get();
}
// my attempt at dynamically adding handlers to the same channel and load balancing between them
@Bean
public DirectChannel handlerChannel() {
    DirectChannel channel = MessageChannels.direct()
            .loadBalancer(new RoundRobinLoadBalancingStrategy())
            .get();
    for (AbstractClientConnectionFactory factory : generateConnections()) {
        channel.subscribe(generateTcpOutboundGateway(factory));
    }
    return channel;
}
Does anyone know how I can solve this problem?
See the dynamic FTP sample; in essence, each outbound gateway goes in its own application context, and the dynamic router routes to the appropriate channel (for which the outbound adapter is created on demand if necessary).
Although the sample uses XML, you can do the same thing with Java configuration, or even with the Java DSL, as shown in the sketch below.
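For instance, a hedged sketch with the Java DSL (Spring Integration 5.x; the channel name handlerChannel is taken from the question, and the rest is illustrative) that registers one gateway flow per port at runtime via IntegrationFlowContext:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.dsl.context.IntegrationFlowContext;
import org.springframework.integration.ip.tcp.TcpOutboundGateway;
import org.springframework.integration.ip.tcp.connection.TcpNetClientConnectionFactory;
import org.springframework.stereotype.Component;

// Illustrative sketch: create and register one TCP gateway flow per configured
// port at runtime; each flow subscribes its gateway to the shared round-robin
// channel, so that channel load-balances across however many ports exist.
@Component
public class DynamicTcpFlowRegistrar {

    @Autowired
    private IntegrationFlowContext flowContext;

    public void registerPort(String host, int port) {
        TcpNetClientConnectionFactory connectionFactory = new TcpNetClientConnectionFactory(host, port);

        TcpOutboundGateway gateway = new TcpOutboundGateway();
        gateway.setConnectionFactory(connectionFactory);

        IntegrationFlow flow = IntegrationFlows
                .from("handlerChannel") // the load-balanced channel from the question
                .handle(gateway)
                .get();

        flowContext.registration(flow)
                .id("tcpGatewayFlow-" + port)
                .register(); // starts the endpoints, so the gateway is subscribed
    }
}

With this approach, the handlerChannel bean keeps its RoundRobinLoadBalancingStrategy but no longer subscribes the gateways itself; each registered flow does that, and the flow registration should take care of starting the endpoints (which appears to have been the problem in the attempt above).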
See my answer to a similar question for multiple IMAP mail adapters using Java configuration and then a follow-up question.