STOMP over WebSocket using Spring and SockJS: message lost

On the client side, in JavaScript, I have:
stomp.subscribe("/topic/path", function (message) {
    console.info("message received");
});
And on the server side:
public class Controller {

    private final MessageSendingOperations<String> messagingTemplate;

    @Autowired
    public Controller(MessageSendingOperations<String> messagingTemplate) {
        this.messagingTemplate = messagingTemplate;
    }

    @SubscribeMapping("/topic/path")
    public void subscribe() {
        LOGGER.info("before send");
        messagingTemplate.convertAndSend("/topic/path", "msg");
    }
}
With this setup, I am occasionally (around once in 30 page refreshes) seeing a message get dropped: I can see neither the "message received" log on the client side nor the WebSocket traffic in the Chrome debugging tool.
"before send" is always logged on the server side.
It looks as if MessageSendingOperations is not ready when I call it in the subscribe() method. (If I put Thread.sleep(50); before calling messagingTemplate.convertAndSend, the problem disappears, or at least becomes much harder to reproduce.)
I wonder if anyone has experienced the same thing, and whether there is an event that can tell me when MessageSendingOperations is ready.

The issue you are facing lies in the nature of the clientInboundChannel, which is an ExecutorSubscribableChannel by default.
It has 3 subscribers:
0 = {SimpleBrokerMessageHandler#5276} "SimpleBroker[DefaultSubscriptionRegistry[cache[0 destination(s)], registry[0 sessions]]]"
1 = {UserDestinationMessageHandler#5277} "UserDestinationMessageHandler[DefaultUserDestinationResolver[prefix=/user/]]"
2 = {SimpAnnotationMethodMessageHandler#5278} "SimpAnnotationMethodMessageHandler[prefixes=[/app/]]"
All of them are invoked via the taskExecutor, hence asynchronously.
The first one here (SimpleBrokerMessageHandler, or StompBrokerRelayMessageHandler if you use a broker relay) is responsible for registering the subscription for the topic.
Your messagingTemplate.convertAndSend("/topic/path", "msg") operation may be performed before the subscription is registered for that WebSocket session, because they run in separate threads. Hence the broker handler does not know that it should send the message to your session.
@SubscribeMapping can instead be placed on a method with a return value, and the result of that method will be sent as a direct reply to that subscription on the client.
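For example, a minimal sketch of that variant, reusing the names from the question (the return value is routed back on the subscription itself rather than through the broker, so there is no race with the subscription registration):
@SubscribeMapping("/topic/path")
public String subscribe() {
    LOGGER.info("replying to subscription");
    return "msg";
}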
HTH

Here is my solution. It is along the same lines. I added an ExecutorChannelInterceptor and published a custom SessionSubscribedEvent. The key is to publish the event after the message has been handled by AbstractBrokerMessageHandler, which means the subscription has been registered with the broker.
@Override
public void configureClientInboundChannel(ChannelRegistration registration) {
    registration.interceptors(new ExecutorChannelInterceptorAdapter() {
        @Override
        public void afterMessageHandled(Message<?> message, MessageChannel channel, MessageHandler handler, Exception ex) {
            SimpMessageHeaderAccessor accessor = SimpMessageHeaderAccessor.wrap(message);
            if (accessor.getMessageType() == SimpMessageType.SUBSCRIBE && handler instanceof AbstractBrokerMessageHandler) {
                /*
                 * Publish a new session subscribed event AFTER the client
                 * has been subscribed to the broker. Before, Spring was
                 * publishing the event after receiving the message but not
                 * necessarily after the subscription occurred. There was a
                 * race condition because the subscription was being done on
                 * a separate thread.
                 */
                applicationEventPublisher.publishEvent(new SessionSubscribedEvent(this, message));
            }
        }
    });
}
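For reference, a consumer of that custom event could look like the following sketch; it assumes the custom SessionSubscribedEvent exposes the original SUBSCRIBE message via getMessage(), and the destination and payload here are placeholders:
@Component
public class SubscriptionReadyListener {

    private final SimpMessagingTemplate messagingTemplate;

    public SubscriptionReadyListener(SimpMessagingTemplate messagingTemplate) {
        this.messagingTemplate = messagingTemplate;
    }

    @EventListener
    public void onSubscribed(SessionSubscribedEvent event) {
        // At this point the broker has registered the subscription, so this send cannot race it.
        SimpMessageHeaderAccessor accessor = SimpMessageHeaderAccessor.wrap(event.getMessage());
        if ("/topic/path".equals(accessor.getDestination())) {
            messagingTemplate.convertAndSend("/topic/path", "msg");
        }
    }
}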

A little late, but I thought I'd add my solution. I was having the same problem with the subscription not being registered before I was sending data through the messaging template. This issue happened rarely and unpredictably because of the race with the DefaultSubscriptionRegistry.
Unfortunately, I could not just use the return value of the @SubscribeMapping method, because we were using a custom object mapper that changed dynamically based on the type of user (attribute filtering, essentially).
I searched through the Spring code and found SubscriptionMethodReturnValueHandler was responsible for sending the return value of subscription mappings and had a different messagingTemplate than the autowired SimpMessagingTemplate of my async controller!!
So the solution was autowiring MessageChannel clientOutboundChannel into my async controller and using that to create a SimpMessagingTemplate. (You can't directly wire it in because you'll just get the template going to the broker).
In subscription methods, I then used the direct template while in other methods I used the template that went to the broker.
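A rough sketch of that wiring, assuming a standard Spring WebSocket setup where the channel bean is named clientOutboundChannel (messages sent through the direct template must carry the session and subscription headers so the outbound channel can route them):
@Controller
public class AsyncController {

    private final SimpMessagingTemplate brokerTemplate;  // routed through the broker
    private final SimpMessagingTemplate directTemplate;  // routed straight to the client session

    public AsyncController(SimpMessagingTemplate brokerTemplate,
                           @Qualifier("clientOutboundChannel") MessageChannel clientOutboundChannel) {
        this.brokerTemplate = brokerTemplate;
        this.directTemplate = new SimpMessagingTemplate(clientOutboundChannel);
    }
}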

Related

Difference between DirectChannel and FluxMessageChannel

I was reading about Spring Integration's FluxMessageChannel here and here, but I still don't understand exactly what the differences are between using a DirectChannel and a FluxMessageChannel when using Project Reactor. Since the DirectChannel is stateless and controlled by its pollers, I'd expect the FluxMessageChannel not to be needed. I'm trying to understand when exactly I should use each and why, for Reactive Streams applications implemented with Spring Integration.
I currently have a reactive project that uses DirectChannel, and it seems to work fine, even the documentation says:
the flow behavior is changed from an imperative push model to a reactive pull model
I'd like to understand when to use each of the channels and what is the exact difference when working with Reactive Streams.
The DirectChannel does not have any poller and its implementation is very simple: as soon as a message is sent to it, the handler is called, in the caller's thread:
public class DirectChannel extends AbstractSubscribableChannel {

    private final UnicastingDispatcher dispatcher = new UnicastingDispatcher();

    private volatile Integer maxSubscribers;

    /**
     * Create a channel with default {@link RoundRobinLoadBalancingStrategy}.
     */
    public DirectChannel() {
        this(new RoundRobinLoadBalancingStrategy());
    }
Where that UnicastingDispatcher is:
public final boolean dispatch(final Message<?> message) {
    if (this.executor != null) {
        Runnable task = createMessageHandlingTask(message);
        this.executor.execute(task);
        return true;
    }
    return this.doDispatch(message);
}
(There is no executor option for the DirectChannel)
private boolean doDispatch(Message<?> message) {
    if (tryOptimizedDispatch(message)) {
        return true;
    }
    ...

protected boolean tryOptimizedDispatch(Message<?> message) {
    MessageHandler handler = this.theOneHandler;
    if (handler != null) {
        try {
            handler.handleMessage(message);
            return true;
        }
        catch (Exception e) {
            throw IntegrationUtils.wrapInDeliveryExceptionIfNecessary(message,
                    () -> "Dispatcher failed to deliver Message", e);
        }
    }
    return false;
}
That's why I call it an "imperative push model". The caller in this case is going to wait until the handler finishes its job, and if you have a big flow, everything happens in the sender's thread until the sent message has reached the end of the chain of direct channels. In short: the publisher is in charge of the whole execution and is blocked in this case. You haven't faced any problems with your DirectChannel-based solution simply because you haven't yet used reactive, non-blocking threads, such as Netty in WebFlux or the MongoDB reactive driver.
The FluxMessageChannel was really designed for Reactive Streams purposes, where the subscriber is in charge of handling messages, which it pulls from the Flux on demand. This way, just after sending, the publisher is free to do anything else, because it is now the subscriber's responsibility to handle the message.
I would say it is definitely OK to use DirectChannel as long as your handlers are not blocking. If they are blocking, you should go with FluxMessageChannel. Also don't forget that there are other channel types for different tasks: https://docs.spring.io/spring-integration/docs/current/reference/html/core.html#channel-implementations
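As an illustration, here is a minimal sketch of declaring a FluxMessageChannel in a flow, assuming the Spring Integration 5.x Java DSL is on the classpath; the bean names and the handler are placeholders:
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.channel.FluxMessageChannel;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.messaging.MessageChannel;

@Configuration
public class ReactiveChannelConfig {

    @Bean
    public MessageChannel reactiveChannel() {
        // Subscribers pull from this channel on demand; the sender is not blocked
        // while downstream handlers do their work.
        return new FluxMessageChannel();
    }

    @Bean
    public IntegrationFlow reactiveFlow() {
        return IntegrationFlows.from(reactiveChannel())
                .<String, String>transform(String::toUpperCase)
                .handle(message -> System.out.println("Handled: " + message.getPayload()))
                .get();
    }
}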

Messages are not committed (lost) when using @TransactionalEventListener to send a message in a JPA transaction

Background of the code:
In order to replicate a production scenario, I have created a dummy app that basically saves something in the DB in a transaction, and in the same method publishes an event; the event listener then sends a message to RabbitMQ.
Classes and usages
The transaction starts from this method:
@Override
@Transactional
public EmpDTO createEmployeeInTrans(EmpDTO empDto) {
    return createEmployee(empDto);
}
This method saves the record in the DB and also publishes the event:
@Override
public EmpDTO createEmployee(EmpDTO empDTO) {
    EmpEntity empEntity = new EmpEntity();
    BeanUtils.copyProperties(empDTO, empEntity);
    System.out.println("<< In Transaction : " + TransactionSynchronizationManager.getCurrentTransactionName() + " >> Saving data for employee " + empDTO.getEmpCode());
    // Record data into the database
    empEntity = empRepository.save(empEntity);
    // Publish the event; this will send the message.
    eventPublisher.publishEvent(new ActivityEvent(empDTO));
    return createResponse(empDTO, empEntity);
}
This is ActivityEvent
import org.springframework.context.ApplicationEvent;
import com.kuldeep.rabbitMQProducer.dto.EmpDTO;

public class ActivityEvent extends ApplicationEvent {
    public ActivityEvent(EmpDTO source) {
        super(source);
    }
}
And this is the @TransactionalEventListener for the above event:
//@Transactional(propagation = Propagation.REQUIRES_NEW)
@TransactionalEventListener(phase = TransactionPhase.AFTER_COMMIT)
public void onActivitySave(ActivityEvent activityEvent) {
    System.out.println("Activity got event ... Sending message .. ");
    kRabbitTemplate.convertAndSend(exchange, routingkey, activityEvent.getSource());
}
The kRabbitTemplate here is a bean configured like this:
@Bean
public RabbitTemplate kRabbitTemplate(ConnectionFactory connectionFactory) {
    final RabbitTemplate kRabbitTemplate = new RabbitTemplate(connectionFactory);
    kRabbitTemplate.setChannelTransacted(true);
    kRabbitTemplate.setMessageConverter(kJsonMessageConverter());
    return kRabbitTemplate;
}
Problem Definition
When I save a record and send a message to RabbitMQ using the above code flow, my messages are not delivered to the server; they are lost.
What I understand about transactions in AMQP is:
If the template is transacted but convertAndSend is not called from a Spring/JPA transaction, then messages are committed within the template's convertAndSend method.
// this is a snippet from org.springframework.amqp.rabbit.core.RabbitTemplate.doSend()
if (isChannelLocallyTransacted(channel)) {
// Transacted channel created by this template -> commit.
RabbitUtils.commitIfNecessary(channel);
}
But if the template is transacted and convertAndSend is called from a Spring/JPA transaction, then isChannelLocallyTransacted in the doSend method evaluates to false and the commit is done by the method which initiated the Spring/JPA transaction.
What I found after investigating the message loss in my code above:
A Spring transaction was active when I called the convertAndSend method, so the message was supposed to be committed within the Spring transaction.
For that, RabbitTemplate binds the resources and registers the synchronizations before sending the message, in bindResourceToTransaction of org.springframework.amqp.rabbit.connection.ConnectionFactoryUtils:
public static RabbitResourceHolder bindResourceToTransaction(RabbitResourceHolder resourceHolder,
        ConnectionFactory connectionFactory, boolean synched) {

    if (TransactionSynchronizationManager.hasResource(connectionFactory)
            || !TransactionSynchronizationManager.isActualTransactionActive() || !synched) {
        return (RabbitResourceHolder) TransactionSynchronizationManager.getResource(connectionFactory); // NOSONAR never null
    }
    TransactionSynchronizationManager.bindResource(connectionFactory, resourceHolder);
    resourceHolder.setSynchronizedWithTransaction(true);
    if (TransactionSynchronizationManager.isSynchronizationActive()) {
        TransactionSynchronizationManager.registerSynchronization(new RabbitResourceSynchronization(resourceHolder,
                connectionFactory));
    }
    return resourceHolder;
}
In my code, after the resource is bound, it is not able to registerSynchronization because TransactionSynchronizationManager.isSynchronizationActive() == false. And since it fails to register the synchronization, the Spring commit does not happen for the RabbitMQ message, as AbstractPlatformTransactionManager.triggerAfterCompletion calls RabbitMQ's commit for each synchronization.
What problems I faced because of the above issue:
The message was not committed in the Spring transaction, so the message was lost.
As the resource was added in bindResourceToTransaction, it remained bound and prevented a resource from being added for any other message sent in the same thread.
Possible root cause of TransactionSynchronizationManager.isSynchronizationActive() == false
I found that the method which starts the transaction removed the synchronizations in triggerAfterCompletion of the org.springframework.transaction.support.AbstractPlatformTransactionManager class, because status.isNewSynchronization() evaluated to true after the DB operation (this usually does not happen if I call convertAndSend without an ApplicationEvent).
private void triggerAfterCompletion(DefaultTransactionStatus status, int completionStatus) {
    if (status.isNewSynchronization()) {
        List<TransactionSynchronization> synchronizations = TransactionSynchronizationManager.getSynchronizations();
        TransactionSynchronizationManager.clearSynchronization();
        if (!status.hasTransaction() || status.isNewTransaction()) {
            if (status.isDebug()) {
                logger.trace("Triggering afterCompletion synchronization");
            }
            // No transaction or new transaction for the current scope ->
            // invoke the afterCompletion callbacks immediately
            invokeAfterCompletion(synchronizations, completionStatus);
        }
        else if (!synchronizations.isEmpty()) {
            // Existing transaction that we participate in, controlled outside
            // of the scope of this Spring transaction manager -> try to register
            // an afterCompletion callback with the existing (JTA) transaction.
            registerAfterCompletionWithExistingTransaction(status.getTransaction(), synchronizations);
        }
    }
}
What I did to overcome this issue
I simply added @Transactional(propagation = Propagation.REQUIRES_NEW) alongside @TransactionalEventListener(phase = TransactionPhase.AFTER_COMMIT) on the onActivitySave method, and it worked because a new transaction was started.
What I need to know
Why is status.isNewSynchronization true in the triggerAfterCompletion method when using an ApplicationEvent?
If the transaction was supposed to terminate in the parent method, why did I get TransactionSynchronizationManager.isActualTransactionActive() == true in the listener class?
If an actual transaction is active, was it supposed to remove the synchronization?
In bindResourceToTransaction, does Spring AMQP assume an active transaction without synchronization? If yes, why not initialize the synchronization if it is not active?
If I propagate a new transaction, then I lose the parent transaction; is there a better way to do it?
Please help me with this; it is a hot production issue, and I am not very sure about the fix I have done.
This is a bug; the RabbitMQ transaction code pre-dated the @TransactionalEventListener code by many years.
The problem is that, with this configuration, we are in a quasi-transactional state: while there is indeed a transaction in process, the synchronizations are already cleared because the transaction has already committed.
Using @TransactionalEventListener(phase = TransactionPhase.BEFORE_COMMIT) works.
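A minimal sketch of that suggested change, applied to the listener from the question (only the phase differs):
@TransactionalEventListener(phase = TransactionPhase.BEFORE_COMMIT)
public void onActivitySave(ActivityEvent activityEvent) {
    System.out.println("Activity got event ... Sending message .. ");
    kRabbitTemplate.convertAndSend(exchange, routingkey, activityEvent.getSource());
}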
I see you already raised an issue:
https://github.com/spring-projects/spring-amqp/issues/1309
In future, it's best to ask questions here, or raise an issue if you feel there is a bug. Don't do both.

How to limit the number of STOMP clients in Spring subscribing to a specific topic, based on a condition?

I have been researching a way to limit the number of clients who can subscribe to a specific STOMP topic, but have not yet understood which approach would be right for my needs.
My use case is a game, which I am developing in Angular (ng2-stompjs STOMP client) and Spring Boot WebSockets (for the moment, the Spring in-memory message broker is in use).
The idea is that a user can be connected and subscribed to a "/lobby" STOMP topic, where they see the open game rooms, which could be in different statuses, for example in play, or not yet started because too few players have joined.
I'd like to intercept and programmatically restrict a client's subscription to a specific "/room/{roomId}" topic IF the MAX number of players has been reached, for example 4. There could also be some simple client-side validation to restrict that, but I believe client-side checks alone are not sufficient.
So my main questions are:
How can a specific stomp topic subscription be intercepted in Spring?
Is it possible to return to the client-requestor some kind of error message that subscription could not be done?
I'd really appreciate your help, thank you in advance!
You could implement a STOMP event listener which listens for subscriptions. In it we can keep a map from destination (room number) to the number of players in that particular room; if the count is already at the maximum, reject the subscription.
@Service
public class StompEventListener {

    private final Map<String, Integer> roomIdVsPlayerCount = new HashMap<>();

    @EventListener
    public void handleSubscription(SessionSubscribeEvent event) {
        StompHeaderAccessor accessor = StompHeaderAccessor.wrap(event.getMessage());
        String destination = accessor.getDestination();
        String roomId = destination.substring(...); // parsed RoomID
        if (roomIdVsPlayerCount.get(roomId) == MAX_ALLOWED_PLAYERS) {
            // Throw an exception which will terminate that client connection,
            // or send an error message like so:
            simpMessagingTemplate.convertAndSend(<some_error_message>);
            return;
        }
        // Not at the maximum, so do the further logic to actually subscribe the user, and
        roomIdVsPlayerCount.merge(roomId, 1, Integer::sum);
    }

    @EventListener
    public void handleUnsubscription(SessionUnsubscribeEvent event) {
        ...
    }
}
Useful References:
SessionSubscribeEvent (For handling the subscriptions)
ConvertAndSend. (For sending the error messages to client.)
EDIT
Please try sending the exception from a channel interceptor, since the above did not send the exception, so that it gets propagated to the client. The map we defined earlier can be defined as a bean in a separate class accessible (with @Autowired) to both the event handler (for incrementing and decrementing) and the TopicSubscriptionInterceptor (for validation).
@Component
public class TopicSubscriptionInterceptor implements ChannelInterceptor {

    @Override
    public Message<?> preSend(Message<?> message, MessageChannel channel) {
        StompHeaderAccessor accessor = StompHeaderAccessor.wrap(message);
        String destination = accessor.getDestination();
        String roomId = destination.substring(...); // parsed RoomID
        if (roomIdVsPlayerCount.get(roomId) == MAX_ALLOWED_PLAYERS) {
            // Throw an exception here, which will terminate that client connection
        }
        // Since it is not at the limit, continue
        return message;
    }
}
Useful reference for implementing a TopicSubscriptionInterceptor: TopicSubscriptionInterceptor
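For completeness, a minimal sketch of how such an interceptor is typically registered on the inbound channel, assuming a standard @EnableWebSocketMessageBroker configuration class:
@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {

    private final TopicSubscriptionInterceptor topicSubscriptionInterceptor;

    public WebSocketConfig(TopicSubscriptionInterceptor topicSubscriptionInterceptor) {
        this.topicSubscriptionInterceptor = topicSubscriptionInterceptor;
    }

    @Override
    public void configureClientInboundChannel(ChannelRegistration registration) {
        // SUBSCRIBE frames pass through here before reaching the broker,
        // so the interceptor can reject them by throwing an exception.
        registration.interceptors(topicSubscriptionInterceptor);
    }
}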

WebSphere MQ Messages Disappear From Queue

I figured I would toss a question on here in case anyone has ideas. My MQ admin created a new queue and an alias queue for me to write messages to. I have one application writing to the queue, and another application listening on the alias queue. I am using the Spring JmsTemplate to write to my queue. We are seeing a behavior where the message is written to the queue but then instantly discarded. We disabled gets, and to see whether an expiry parameter was being set somehow, I used the JmsTemplate to set the expiry setting (timeToLive). I set the expiry to 10 minutes, but my message still disappears instantly. A snippet of my code and settings is below.
public void publish(ModifyRequestType response) {
    jmsTemplate.setExplicitQosEnabled(true);
    jmsTemplate.setTimeToLive(600000);
    jmsTemplate.send(CM_QUEUE_NAME, new MessageCreator() {
        public Message createMessage(Session session) throws JMSException {
            String responseXML = null;
            try {
                responseXML = myJAXBContext.getInstance().toXML(response);
                log.info(responseXML);
                TextMessage message = session.createTextMessage(responseXML);
                return message;
            } catch (myException e) {
                e.printStackTrace();
                log.info(responseXML);
                return null;
            }
        }
    });
}
/////////////////My settings
QUEUE.PUB_SUB_DOMAIN=false
QUEUE.SUBSCRIPTION_DURABLE=false
QUEUE.CLONE_SUPPORT=0
QUEUE.SHARE_CONV_ALLOWED=1
QUEUE.MQ_PROVIDER_VERSION=6
I found my issue. I had a parent method with the @Transactional annotation. I do not want my new JMS message to be part of that transaction, so I am going to call jmsTemplate.setSessionTransacted(false); before performing the jmsTemplate.send. I have created a separate JmsTemplate for sending my new message instead of reusing the existing one, which needs to be managed.
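A minimal sketch of that arrangement; the bean name and connection factory wiring are assumptions, the point being a second, non-transacted template dedicated to this send:
@Bean
public JmsTemplate nonTransactedJmsTemplate(ConnectionFactory connectionFactory) {
    JmsTemplate template = new JmsTemplate(connectionFactory);
    // Keep this template's sessions out of the surrounding @Transactional scope,
    // so the message is sent immediately instead of waiting for (or being
    // discarded with) the parent transaction.
    template.setSessionTransacted(false);
    return template;
}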

How to set a Message Handler programmatically in Spring Cloud AWS SQS?

Maybe someone has an idea for my following problem:
I am currently on a project where I want to use AWS SQS with the Spring Cloud integration. For the receiver part I want to provide an API where a user can register a "message handler" on a queue; the handler is an interface and will contain the user's business logic, e.g.
MyAwsSqsReceiver receiver = new MyAwsSqsReceiver();
receiver.register("a-queue-name", new MessageHandler() {
    @Override
    public void handle(String message) {
        // ... business logic for the received message
    }
});
I found examples, e.g.
https://codemason.me/2016/03/12/amazon-aws-sqs-with-spring-cloud/
and read the documentation
http://cloud.spring.io/spring-cloud-aws/spring-cloud-aws.html#_sqs_support
But the only thing I found there to "connect" functionality for processing an incoming message is an annotation on a method, e.g. @SqsListener or @MessageMapping.
These annotations are fixed to a certain queue name, though. So now I am at a loss how to dynamically "connect" my provided "MessageHandler" (from my API) to the incoming message for the specified queue name.
In the config of the example there is a SimpleMessageListenerContainer, which gets a QueueMessageHandler set, but this QueueMessageHandler does not seem to be the right place to set my handler, or to override its methods and provide my own subclass of QueueMessageHandler.
I already did something like this with the Spring AMQP integration and RabbitMQ, and thought it would be similar here with AWS SQS.
Does anyone have an idea how to accomplish this?
thx + bye,
Ximon
EDIT:
I found that Spring JMS could actually do that, e.g. www.javacodegeeks.com/2016/02/aws-sqs-spring-jms-integration.html. Does anybody know what consequences, good or bad, using the JMS protocol has here?
I am facing the same issue.
I am going an unusual way: I set up an AWS client bean at build time and then, instead of using the @SqsListener annotation to consume from a specific queue, I use the @Scheduled annotation so that I can programmatically poll (every 10 seconds in my case) whichever queue I want to consume from.
The example below iterates over the queues defined in properties and then consumes from each one.
Client Bean:
@Bean
@Primary
public AmazonSQSAsync awsSqsClient() {
    return AmazonSQSAsyncClientBuilder
            .standard()
            .withRegion(Regions.EU_WEST_1.getName())
            .build();
}
Consumer:
// injected in the constructor
private final AmazonSQSAsync awsSqsClient;

@Scheduled(fixedDelay = 10000)
public void pool() {
    properties.getSqsQueues()
            .forEach(queue -> {
                val receiveMessageRequest = new ReceiveMessageRequest(queue)
                        .withWaitTimeSeconds(10)
                        .withMaxNumberOfMessages(10);
                // reading the messages
                val result = awsSqsClient.receiveMessage(receiveMessageRequest);
                val sqsMessages = result.getMessages();
                log.info("Received Message on queue {}: message = {}", queue, sqsMessages.toString());
                // deleting the messages
                sqsMessages.forEach(message -> {
                    val deleteMessageRequest = new DeleteMessageRequest(queue, message.getReceiptHandle());
                    awsSqsClient.deleteMessage(deleteMessageRequest);
                });
            });
}
Just to clarify, in my case I need multiple queues, one for each tenant, with the queue URL for each one passed in a property file. Of course, in your case, you could get the queue names from another source, maybe a ThreadLocal which holds the queues you have created at runtime.
If you wish, you can also try the JMS approach, where you create message consumers and add a listener to each one you wish (see the AWS JMS documentation), as in the sketch below.
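A rough sketch of that JMS approach, assuming the amazon-sqs-java-messaging-lib and AWS SDK v1 are on the classpath and reusing the MessageHandler interface from the question; the class and method names here are purely illustrative:
import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

import com.amazon.sqs.javamessaging.ProviderConfiguration;
import com.amazon.sqs.javamessaging.SQSConnectionFactory;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;

public class DynamicSqsJmsReceiver {

    public void register(String queueName, MessageHandler handler) throws JMSException {
        // The SQS JMS provider exposes SQS queues through the standard JMS API,
        // so a listener can be attached to a queue name chosen at runtime.
        SQSConnectionFactory connectionFactory = new SQSConnectionFactory(
                new ProviderConfiguration(), AmazonSQSClientBuilder.defaultClient());

        Connection connection = connectionFactory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue(queueName);
        MessageConsumer consumer = session.createConsumer(queue);

        // Adapt the JMS callback to the MessageHandler interface from the question.
        consumer.setMessageListener(message -> {
            try {
                handler.handle(((TextMessage) message).getText());
            } catch (JMSException e) {
                e.printStackTrace();
            }
        });

        connection.start();
    }
}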
When we do Spring and SQS we use the spring-cloud-starter-aws-messaging.
Then just create a listener class:
@Component
public class MyListener {

    @SqsListener(value = "myqueue")
    public void listen(MyMessageType message) {
        // process the message
    }
}
