How can I manage sessions with stomp protocol? - spring

I am developing a WebSocket application in a Spring Boot / React environment.
I am using the STOMP protocol and storing sessions in a ConcurrentHashMap.
This is because when no session is subscribed to a destination, data transmission should stop, and when there is at least one subscriber, data must be published periodically.
public final static Map<String, String> subscribeSessions = new ConcurrentHashMap<>();

@Override
public void postSend(Message<?> message, MessageChannel channel, boolean sent) {
    StompHeaderAccessor accessor = StompHeaderAccessor.wrap(message);
    String sessionId = accessor.getSessionId();
    StompCommand command = accessor.getCommand();
    if (command == null) {
        return; // e.g. heartbeat frames carry no STOMP command
    }
    switch (command) {
        case CONNECT:
            break;
        case DISCONNECT:
            break;
        case SUBSCRIBE:
            if (!accessor.getDestination().contains("/user/queue")) {
                subscribeSessions.put(sessionId, accessor.getDestination());
            }
            break;
        case UNSUBSCRIBE:
            subscribeSessions.remove(sessionId);
            break; // previously fell through to default; harmless, but unintended
        default:
            break;
    }
}
I tried to store entries like the following, since one user can subscribe to multiple destinations:
Session A - /Topic/ex
Session A - /topic/ex/2
However, this is not possible, because a HashMap does not allow duplicate keys.
Is there any way to solve this?
Or is there a way to check how many sessions are subscribed to a given destination?
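One way to handle this (a minimal sketch, not from the original post; subscriptionsBySession is a made-up name) is to map each session ID to a set of destinations, which also makes counting the subscribers of a destination straightforward:
public static final Map<String, Set<String>> subscriptionsBySession = new ConcurrentHashMap<>();

// SUBSCRIBE: one session can now hold several destinations.
subscriptionsBySession
        .computeIfAbsent(sessionId, id -> ConcurrentHashMap.newKeySet())
        .add(destination);

// DISCONNECT: drop everything recorded for the session.
subscriptionsBySession.remove(sessionId);

// How many sessions are subscribed to a given destination?
long subscribers = subscriptionsBySession.values().stream()
        .filter(destinations -> destinations.contains(destination))
        .count();
Note that an UNSUBSCRIBE frame identifies the subscription by its id header rather than by destination, so removing a single subscription cleanly also means remembering the subscription ID (for example, a nested map of session ID to subscription ID to destination). If sessions are associated with users, Spring's SimpUserRegistry can also enumerate the current subscriptions without keeping your own map.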

Related

Spring cloud function Function interface return success/failure handling

I currently have a Spring Cloud Stream application with a listener function that listens to a certain topic and executes the following in sequence:
1. Consume messages from the topic
2. Store the consumed message in the DB
3. Call an external service for some information
4. Process the data
5. Record the results in the DB
6. Send the message to another topic
7. Acknowledge the message (I have the acknowledge mode set to manual)
We have decided to move to Spring Cloud Function, and I have already been able to do almost all of the steps above using the Function interface, with the source topic as input and the sink topic as output.
@Bean
public Function<Message<NotificationMessage>, Message<ValidatedEvent>> validatedProducts() {
    return message -> {
        Acknowledgment acknowledgment = message.getHeaders().get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);
        notificationMessageService.saveOrUpdate(notificationMessage, 0, false);
        String status = restEndpoint.getStatusFor(message.getPayload());
        ValidatedEvent event = getProcessingResult(message.getPayload(), status);
        notificationMessageService.saveOrUpdate(notificationMessage, 1, true);
        Optional.ofNullable(acknowledgment).ifPresent(Acknowledgment::acknowledge);
        return MessageBuilder
                .withPayload(event)
                .setHeader(KafkaHeaders.MESSAGE_KEY, event.getKey().getBytes())
                .build();
    };
}
My problem is with exception handling in step 7 (acknowledge the message). We only acknowledge the message if we are sure it was sent successfully to the sink topic; otherwise we do not acknowledge it.
My question is: how can such a thing be implemented within Spring Cloud Function, especially since the send is performed entirely by the framework (as the result of evaluating the Function implementation)?
Earlier, we could do this through try/catch:
@StreamListener(value = NotificationMessage.INPUT)
public void onMessage(Message<NotificationMessage> message) {
    try {
        Acknowledgment acknowledgment = message.getHeaders().get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);
        notificationMessageService.saveOrUpdate(notificationMessage, 0, false);
        String status = restEndpoint.getStatusFor(message.getPayload());
        ValidatedEvent event = getProcessingResult(message.getPayload(), status);
        Message<ValidatedEvent> outbound = MessageBuilder // renamed: 'message' would shadow the parameter
                .withPayload(event)
                .setHeader(KafkaHeaders.MESSAGE_KEY, event.getKey().getBytes())
                .build();
        kafkaTemplate.send(outbound);
        notificationMessageService.saveOrUpdate(notificationMessage, 1, true);
        Optional.ofNullable(acknowledgment).ifPresent(Acknowledgment::acknowledge);
    } catch (Exception exception) {
        notificationMessageService.saveOrUpdate(notificationMessage, 1, false);
    }
}
Is there a listener that triggers after the Function interface has returned successfully, something like KafkaSendCallback but without specifying a template?
Building upon what Oleg mentioned above, if you want to strictly restore the behavior in your StreamListener code, here is something you can try. Instead of using a function, you can switch to a consumer and then use KafkaTemplate to send on the outbound as you had previously.
@Bean
public Consumer<Message<NotificationMessage>> validatedProducts() {
    return message -> {
        try {
            Acknowledgment acknowledgment = message.getHeaders().get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);
            notificationMessageService.saveOrUpdate(notificationMessage, 0, false);
            String status = restEndpoint.getStatusFor(message.getPayload());
            ValidatedEvent event = getProcessingResult(message.getPayload(), status);
            Message<ValidatedEvent> outbound = MessageBuilder // renamed: 'message' would shadow the parameter
                    .withPayload(event)
                    .setHeader(KafkaHeaders.MESSAGE_KEY, event.getKey().getBytes())
                    .build();
            // Here, make sure the data was sent successfully, e.g. by using a callback.
            kafkaTemplate.send(outbound);
            // Only ack if the data was sent successfully.
            Optional.ofNullable(acknowledgment).ifPresent(Acknowledgment::acknowledge);
        } catch (Exception exception) {
            notificationMessageService.saveOrUpdate(notificationMessage, 1, false);
        }
    };
}
Another thing worth looking into is using Kafka transactions, in which case, if processing doesn't succeed end-to-end, no acknowledgment will happen. The Spring Cloud Stream binder has support for this based on the foundations in Spring for Apache Kafka. More details here. Here is the Spring Cloud Stream doc on this.
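For reference, binder-level transactions are enabled through configuration alone; a minimal sketch (the prefix value is illustrative, the property itself is the documented binder setting):
# Enables a transactional producer factory in the Kafka binder; the
# consume-process-produce sequence then commits or aborts as a unit.
spring.cloud.stream.kafka.binder.transaction.transaction-id-prefix=tx-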
Spring Cloud Stream has no knowledge of the function. It is just the same message handler as it was before, so the same approach with a callback as you used before would work with functions. So perhaps you can share some code that could clarify what you mean? I also don't understand what you mean by "..send method is fully dependant on the Spring Framework..".
Alright, so what I opted for was actually not to use KafkaTemplate (or StreamBridge, for that matter). While that is a feasible solution, it would mean that my Function gets split into a Consumer and some sort of improvised supplier (the KafkaTemplate in this case).
As I wanted to adhere to the design goals of the functional interface, I isolated the database-update behaviour in a ProducerListener implementation:
@Configuration
public class ProducerListenerConfiguration {

    private final MongoTemplate mongoTemplate;

    public ProducerListenerConfiguration(MongoTemplate mongoTemplate) {
        this.mongoTemplate = mongoTemplate;
    }

    @Bean
    public ProducerListener myProducerListener() {
        return new ProducerListener() {

            @SneakyThrows
            @Override
            public void onSuccess(ProducerRecord producerRecord, RecordMetadata recordMetadata) {
                final ValidatedEvent event = new ObjectMapper().readerFor(ValidatedEvent.class).readValue((byte[]) producerRecord.value());
                final var updateResult = updateDocumentProcessedState(event.getKey(), event.getPayload().getVersion(), true);
            }

            @SneakyThrows
            @Override
            public void onError(ProducerRecord producerRecord, @Nullable RecordMetadata recordMetadata, Exception exception) {
                ProducerListener.super.onError(producerRecord, recordMetadata, exception);
            }
        };
    }

    public UpdateResult updateDocumentProcessedState(String id, long version, boolean isProcessed) {
        Query query = new Query();
        query.addCriteria(Criteria.where("_id").is(id));
        Update update = new Update();
        update.set("processed", isProcessed);
        update.set("version", version);
        return mongoTemplate.updateFirst(query, update, ProductChangedEntity.class);
    }
}
Then with each successful attempt, the DB is updated with the processing result and the updated version number.
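If you manage your own KafkaTemplate, the listener is attached with the standard Spring for Apache Kafka hook (a sketch; the Spring Cloud Stream Kafka binder can also pick up a ProducerListener bean from the context, so explicit wiring may not be needed):
// Attach the listener so onSuccess/onError fire with each send result.
kafkaTemplate.setProducerListener(myProducerListener());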

Replay Kafka topic with Server-Sent-Events

I'm thinking about the following use-case and would like to validate if this approach is conceptually valid.
The goal is to expose a long-running Server-Sent-Event (SSE) endpoint in Spring replaying the same Kafka topic for each incoming connection (with some user-specific filtering).
The SSE is exposed in this way:
@GetMapping("/sse")
public SseEmitter sse() {
    SseEmitter sseEmitter = new SseEmitter();
    Executors
            .newSingleThreadExecutor()
            .execute(() -> dummyDataProducer.generate() // Kafka, ultimately
                    .forEach(payload -> {
                        try {
                            sseEmitter.send(payload);
                        } catch (IOException ex) {
                            sseEmitter.completeWithError(ex);
                        }
                    }));
    return sseEmitter;
}
On the other side, there is a KafkaListener method (a ConcurrentKafkaListenerContainerFactory is used):
@KafkaListener(topics = "${app.kafka.topic1}")
public void receive(
        @Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) Integer id,
        @Payload Object payload) {
    // do something ...
}
As far as I know, a Kafka consumer application uses one thread to read data from a single topic. That seems at odds with the idea of SSE, where a dedicated long-running thread is created for each incoming connection.
Is it a valid approach for this use-case? If so, how to accomplish this properly?
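One arrangement that avoids a consumer per connection (a minimal sketch, not from the original post; the emitters registry and class name are made up) is to let the single listener thread fan records out to a thread-safe registry of per-connection emitters:
@RestController
public class SseController {

    // One entry per open SSE connection; the Kafka listener fans out to all of them.
    private final List<SseEmitter> emitters = new CopyOnWriteArrayList<>();

    @GetMapping("/sse")
    public SseEmitter sse() {
        SseEmitter emitter = new SseEmitter(0L); // no timeout
        emitter.onCompletion(() -> emitters.remove(emitter));
        emitter.onTimeout(() -> emitters.remove(emitter));
        emitters.add(emitter);
        return emitter;
    }

    @KafkaListener(topics = "${app.kafka.topic1}")
    public void receive(@Payload Object payload) {
        for (SseEmitter emitter : emitters) {
            try {
                emitter.send(payload); // user-specific filtering would go here
            } catch (IOException ex) {
                emitter.completeWithError(ex);
            }
        }
    }
}
Note this only fans out records as they arrive on the listener's single thread; actually replaying the topic's history for each new connection would still require a per-connection consumer (for example, a fresh consumer group seeking to the beginning of the topic).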

How to limit the number of stomp clients in Spring, subscribing to a specific topic, based on a condition?

I have been researching a way to limit the number of clients who can subscribe to a specific STOMP topic, but I have not yet worked out the right approach for my needs.
My use case is a game I am developing with Angular (the ng2-stompjs STOMP client) and Spring Boot WebSockets (for the moment, the Spring in-memory message broker is in use).
The idea is that a user is connected and subscribed to a "/lobby" STOMP topic, where he sees the open game rooms, which can be in different statuses: for example, in play, or not yet started because too few players have joined.
I'd like to intercept and programmatically reject a client's subscription to a specific "/room/{roomId}" topic IF the maximum number of players (for example, 4) has been reached. There could also be some simple client-side validation to restrict this, but I believe client-side checks alone are not sufficient.
So my main questions are:
How can a specific stomp topic subscription be intercepted in Spring?
Is it possible to return some kind of error message to the requesting client, saying that the subscription could not be made?
I'd really appreciate your help, thank you in advance!
You could implement an application event listener that listens for subscription events and keeps a map from destination (room number) to the number of players in that particular room. If the count is already at the maximum, reject the subscription.
@Service
class StompEventListener {

    // Concurrent, since subscription events may fire on different threads.
    private final Map<String, Integer> roomIdVsPlayerCount = new ConcurrentHashMap<>();

    @EventListener
    public void handleSubscription(SessionSubscribeEvent event) {
        StompHeaderAccessor accessor = StompHeaderAccessor.wrap(event.getMessage());
        String destination = accessor.getDestination();
        String roomId = destination.substring(...); // parsed room ID
        if (roomIdVsPlayerCount.getOrDefault(roomId, 0) >= MAX_ALLOWED_PLAYERS) {
            // Throw an exception (which will terminate that client connection),
            // or send an error message like so:
            // simpMessagingTemplate.convertAndSend(<some_error_message>);
            return;
        }
        // Not at the maximum: do any further logic to actually subscribe the user, and
        roomIdVsPlayerCount.merge(roomId, 1, Integer::sum);
    }

    @EventListener
    public void handleUnsubscription(SessionUnsubscribeEvent event) {
        ...
    }
}
Useful references:
SessionSubscribeEvent (for handling the subscriptions)
convertAndSend (for sending the error messages to the client)
EDIT
Try sending the error from a ChannelInterceptor instead, since the exception thrown in the listener above did not get propagated to the client. The map we defined earlier can be defined as a bean in a separate class, accessible (with @Autowired) to both the event handler (for incrementing and decrementing) and the TopicSubscriptionInterceptor (for validation).
@Component
class TopicSubscriptionInterceptor implements ChannelInterceptor {

    @Override
    public Message<?> preSend(Message<?> message, MessageChannel channel) {
        StompHeaderAccessor accessor = StompHeaderAccessor.wrap(message);
        if (StompCommand.SUBSCRIBE.equals(accessor.getCommand())) { // only validate SUBSCRIBE frames
            String destination = accessor.getDestination();
            String roomId = destination.substring(...); // parsed room ID
            if (roomIdVsPlayerCount.getOrDefault(roomId, 0) >= MAX_ALLOWED_PLAYERS) {
                // Throw an exception here, which will terminate that client connection.
            }
        }
        // Not at the limit (or not a SUBSCRIBE frame): let the message through.
        return message;
    }
}
Useful reference for implementing a TopicSubscriptionInterceptor: TopicSubscriptionInterceptor
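For completeness, a sketch of registering such an interceptor on the inbound channel with the standard configurer API (class names are illustrative):
@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {

    @Override
    public void configureClientInboundChannel(ChannelRegistration registration) {
        // Validate SUBSCRIBE frames before they reach the broker.
        registration.interceptors(new TopicSubscriptionInterceptor());
    }
}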

Spring Cloud Stream RabbitMQ

I am trying to understand why I would want to use Spring Cloud Stream with RabbitMQ. I've had a look at RabbitMQ Spring tutorial 4 (https://www.rabbitmq.com/tutorials/tutorial-four-spring-amqp.html), which is basically what I want to do: it creates a direct exchange with 2 queues attached, and depending on the routing key a message is routed either to Q1 or to Q2.
The whole process is pretty straightforward if you look at the tutorial: you create all the parts, bind them together, and you're ready to go.
I was wondering what benefit I would gain from using Spring Cloud Stream, and whether that is even a use case for it. It was easy to create a simple exchange, and even defining destination and group was straightforward with Stream. So I thought, why not go further and try to handle the tutorial case with Stream?
I have seen that Stream has a BinderAwareChannelResolver, which seems to do the same thing, but I am struggling to put it all together to achieve the same as in the RabbitMQ Spring tutorial. I am not sure if it is a dependency issue, but I seem to misunderstand something fundamental here; I thought something like:
spring.cloud.stream.bindings.output.destination=myDestination
spring.cloud.stream.bindings.output.group=consumerGroup
spring.cloud.stream.rabbit.bindings.output.producer.routing-key-expression='key'
should do the trick.
Is there anyone with a minimal example for a source and sink which basically creates a direct exchange, binds 2 queues to it and depending on routing key routes to either one of those 2 queues like in https://www.rabbitmq.com/tutorials/tutorial-four-spring-amqp.html?
EDIT:
Below is a minimal set of code which demonstrates how to do what I asked. I did not attach the build.gradle, as it is straightforward (but if anyone is interested, let me know).
application.properties: set up the producer
spring.cloud.stream.bindings.output.destination=tut.direct
spring.cloud.stream.rabbit.bindings.output.producer.exchangeType=direct
spring.cloud.stream.rabbit.bindings.output.producer.routing-key-expression=headers.type
Sources.class: set up the producer's channel
public interface Sources {
    String OUTPUT = "output";

    @Output(Sources.OUTPUT)
    MessageChannel output();
}
StatusController.class: respond to REST calls and send messages with specific routing keys
/**
 * Status endpoint for the health-check service.
 */
@RestController
@EnableBinding(Sources.class)
public class StatusController {

    private int index;
    private int count;
    private final String[] keys = {"orange", "black", "green"};
    private Sources sources;
    private StatusService status;

    @Autowired
    public StatusController(Sources sources, StatusService status) {
        this.sources = sources;
        this.status = status;
    }

    /**
     * Service available, service returns "OK".
     * @return The status of the service.
     */
    @RequestMapping("/status")
    public String status() {
        String status = this.status.getStatus();
        StringBuilder builder = new StringBuilder("Hello to ");
        if (++this.index == 3) {
            this.index = 0;
        }
        String key = keys[this.index];
        builder.append(key).append(' ');
        builder.append(Integer.toString(++this.count));
        String payload = builder.toString();
        log.info(payload);
        // add kv pair - routing-key-expression (which reads the 'type' header)
        // will evaluate it and use the value as the routing key
        Message<String> msg = new GenericMessage<>(payload, Collections.singletonMap("type", key));
        sources.output().send(msg);
        // return rest call
        return status;
    }
}
Consumer side of things, properties:
spring.cloud.stream.bindings.input.destination=tut.direct
spring.cloud.stream.rabbit.bindings.input.consumer.exchangeType=direct
spring.cloud.stream.rabbit.bindings.input.consumer.bindingRoutingKey=orange
spring.cloud.stream.bindings.inputer.destination=tut.direct
spring.cloud.stream.rabbit.bindings.inputer.consumer.exchangeType=direct
spring.cloud.stream.rabbit.bindings.inputer.consumer.bindingRoutingKey=black
Sinks.class:
public interface Sinks {
    String INPUT = "input";

    @Input(Sinks.INPUT)
    SubscribableChannel input();

    String INPUTER = "inputer";

    @Input(Sinks.INPUTER)
    SubscribableChannel inputer();
}
ReceiveStatus.class: Receive the status:
@EnableBinding(Sinks.class)
public class ReceiveStatus {

    @StreamListener(Sinks.INPUT)
    public void receiveStatusOrange(String msg) {
        log.info("I received a message. It was orange number: {}", msg);
    }

    @StreamListener(Sinks.INPUTER)
    public void receiveStatusBlack(String msg) {
        log.info("I received a message. It was black number: {}", msg);
    }
}
Spring Cloud Stream lets you develop event-driven microservice applications by enabling them to connect (via @EnableBinding) to external messaging systems using the Spring Cloud Stream Binder implementations (Kafka, RabbitMQ, JMS binders, etc.). Spring Cloud Stream uses Spring AMQP for the RabbitMQ binder implementation.
The BinderAwareChannelResolver is applicable for dynamic destination binding on the producer side; in your case, it is about configuring the exchange and binding consumers to that exchange.
For instance, you need 2 consumers with the appropriate bindingRoutingKey set based on your criteria, and a single producer with the properties (routing-key-expression, destination) you mentioned above, except the group. I noticed that you have configured group for the outbound channel; the group property is applicable only to consumers (hence inbound).
You might also want to check this one: https://github.com/spring-cloud/spring-cloud-stream-binder-rabbit/issues/57 as I see some discussion around using routing-key-expression. Specifically, check this one on using the expression value.

Stomp over websocket using Spring and sockJS message lost

On the client side (JavaScript) I have:
stomp.subscribe("/topic/path", function (message) {
    console.info("message received");
});
And on the server side
public class Controller {

    private final MessageSendingOperations<String> messagingTemplate;

    @Autowired
    public Controller(MessageSendingOperations<String> messagingTemplate) {
        this.messagingTemplate = messagingTemplate;
    }

    @SubscribeMapping("/topic/path")
    public void subscribe() {
        LOGGER.info("before send");
        messagingTemplate.convertAndSend("/topic/path", "msg");
    }
}
From this setup, I am occasionally (around once in 30 page refreshes) experiencing message dropping, meaning I see neither the "message received" log on the client side nor the WebSocket traffic in the Chrome debugging tool.
"before send" is always logged on the server side.
It looks like the MessageSendingOperations is not ready when I call it in the subscribe() method. (If I put Thread.sleep(50); before calling messagingTemplate.convertAndSend, the problem disappears, or at least becomes much harder to reproduce.)
I wonder if anyone has experienced this before, and whether there is an event that can tell me when MessageSendingOperations is ready.
The issue you are facing lies in the nature of the clientInboundChannel, which is an ExecutorSubscribableChannel by default.
It has 3 subscribers:
0 = {SimpleBrokerMessageHandler#5276} "SimpleBroker[DefaultSubscriptionRegistry[cache[0 destination(s)], registry[0 sessions]]]"
1 = {UserDestinationMessageHandler#5277} "UserDestinationMessageHandler[DefaultUserDestinationResolver[prefix=/user/]]"
2 = {SimpAnnotationMethodMessageHandler#5278} "SimpAnnotationMethodMessageHandler[prefixes=[/app/]]"
which are invoked within taskExecutor, hence asynchronously.
The first one here (SimpleBrokerMessageHandler, or StompBrokerRelayMessageHandler if you use a broker relay) is responsible for registering the subscription for the topic.
Your messagingTemplate.convertAndSend("/topic/path", "msg") operation may be performed before the subscription registration for that WebSocket session, because they are performed on separate threads. Hence the broker handler doesn't yet know to send the message to your session.
@SubscribeMapping can instead be placed on a method with a return value, in which case the result of the method is sent directly back to the client as a reply to that subscription.
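For example, a minimal sketch of that return-value approach (not code from the original answer):
@SubscribeMapping("/topic/path")
public String subscribe() {
    // The returned value is written straight back to the subscribing client
    // rather than through the broker, so it cannot race with the broker-side
    // subscription registration.
    return "msg";
}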
HTH
Here is my solution; it is along the same lines. I added an ExecutorChannelInterceptor and published a custom SessionSubscribedEvent. The key is to publish the event after the message has been handled by the AbstractBrokerMessageHandler, which means the subscription has been registered with the broker.
@Override
public void configureClientInboundChannel(ChannelRegistration registration) {
    registration.interceptors(new ExecutorChannelInterceptorAdapter() {
        @Override
        public void afterMessageHandled(Message<?> message, MessageChannel channel, MessageHandler handler, Exception ex) {
            SimpMessageHeaderAccessor accessor = SimpMessageHeaderAccessor.wrap(message);
            if (accessor.getMessageType() == SimpMessageType.SUBSCRIBE && handler instanceof AbstractBrokerMessageHandler) {
                /*
                 * Publish a new session subscribed event AFTER the client
                 * has been subscribed to the broker. Before, Spring was
                 * publishing the event after receiving the message but not
                 * necessarily after the subscription occurred. There was a
                 * race condition because the subscription was being done on
                 * a separate thread.
                 */
                applicationEventPublisher.publishEvent(new SessionSubscribedEvent(this, message));
            }
        }
    });
}
A little late, but I thought I'd add my solution. I was having the same problem: the subscription was not registered before I sent data through the messaging template. The issue happened rarely and unpredictably because of the race with the DefaultSubscriptionRegistry.
Unfortunately, I could not just use the return value of the @SubscribeMapping method, because we were using a custom object mapper that changed dynamically based on the type of user (attribute filtering, essentially).
I searched through the Spring code and found that SubscriptionMethodReturnValueHandler is responsible for sending the return value of subscription mappings, and it has a different messagingTemplate than the autowired SimpMessagingTemplate of my async controller!
So the solution was autowiring MessageChannel clientOutboundChannel into my async controller and using that to create a SimpMessagingTemplate. (You can't wire the template in directly, because you'll just get the template that goes to the broker.)
In subscription methods I then used the direct template, while in other methods I used the template that goes through the broker.
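A minimal sketch of that wiring (assuming the standard clientOutboundChannel bean name; class and field names are illustrative):
@Controller
public class MyAsyncController {

    private final SimpMessagingTemplate brokerTemplate; // routes through the broker
    private final SimpMessagingTemplate directTemplate; // writes straight to the client session

    public MyAsyncController(SimpMessagingTemplate brokerTemplate,
                             @Qualifier("clientOutboundChannel") MessageChannel clientOutboundChannel) {
        this.brokerTemplate = brokerTemplate;
        this.directTemplate = new SimpMessagingTemplate(clientOutboundChannel);
    }
}
Messages sent through the direct template need the STOMP session (and subscription) headers copied from the inbound message, so the WebSocket layer knows which client session to write to.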
