I am working on real-time event-based application using Spring WebSockets, Messaging and RabbitMQ. In this application, messages need to be delivered to clients in the exact order they were inserted into RabbitMQ.
"EDITED"
Our goal is to receive a message from a browser, process it on the server in order (against a unique object determined by a route parameter), enrich the message, and broadcast it to all subscribing browsers via an external STOMP broker (RabbitMQ).
Our MessageMapping method is as follows:
@MessageMapping("/commands.{route}.{data}")
public CommandMessage receiveCommand(CommandMessage message, Principal principal) {
    try {
        // Get object to synchronize on using the route
        Object o = ...
        synchronized (o) {
            // Perform command on object
            // Set message server sequence
            message.setServerSequence(o.getAutoIncrementSequence());
            // Log server sequence
            log.debug("Message server sequence: " + message.getServerSequence());
            // Send to external MQ for broadcasting to all subscribers
            return message;
        }
    } catch (Exception e) {
        ...
    }
    return null;
}
If we configure the clientInboundChannel and clientOutboundChannel with a corePoolSize and maxPoolSize of 1 each, all messages arrive in order.
If we increase the corePoolSize and maxPoolSize on the clientInboundChannel, messages reach the MQ out of order; if we do the same for the clientOutboundChannel, messages reach the browser out of order.
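For reference, we size those channels roughly as follows; this is a sketch assuming Java configuration with AbstractWebSocketMessageBrokerConfigurer (our actual configuration class and STOMP endpoint/broker relay registration are not shown):

@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig extends AbstractWebSocketMessageBrokerConfigurer {

    // ... STOMP endpoint and broker relay registration omitted ...

    @Override
    public void configureClientInboundChannel(ChannelRegistration registration) {
        registration.taskExecutor().corePoolSize(1).maxPoolSize(1);
    }

    @Override
    public void configureClientOutboundChannel(ChannelRegistration registration) {
        registration.taskExecutor().corePoolSize(1).maxPoolSize(1);
    }
}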
Tests were done using a single browser client.
We turned on tracing for StompBrokerRelayMessageHandler and received log entries like:
2014-05-06 14:26:39 TRACE [clientInboundChannel-6] o.s.m.s.s.StompBrokerRelayMessageHandler [StompBrokerRelayMessageHandler.java:412] Processing message=[Payload byte[303]][Headers={stompCommand=SEND, nativeHeaders={content-type=[application/json;charset=UTF-8], destination=[/topic/commands.BKN01.20140318]}, simpMessageType=MESSAGE, simpDestination=/topic/commands.BKN01.20140318, contentType=application/json;charset=UTF-8, simpSessionId=ehjcoxb3, id=a31d0e3d-12cc-f562-1ec2-e2d7ba0899eb, timestamp=1399400799940}]
2014-05-06 14:26:39 DEBUG [clientInboundChannel-6] o.s.m.s.s.StompBrokerRelayMessageHandler [StompBrokerRelayMessageHandler.java:658] Forwarding message to broker
2014-05-06 14:26:39 TRACE [clientInboundChannel-3] o.s.m.s.s.StompBrokerRelayMessageHandler [StompBrokerRelayMessageHandler.java:406] Ignoring message to destination=/app/commands.BKN01.20140318
2014-05-06 14:26:39 TRACE [clientInboundChannel-7] o.s.m.s.s.StompBrokerRelayMessageHandler [StompBrokerRelayMessageHandler.java:406] Ignoring message to destination=/app/commands.BKN01.20140318
2014-05-06 14:26:39 TRACE [clientInboundChannel-1] o.s.m.s.s.StompBrokerRelayMessageHandler [StompBrokerRelayMessageHandler.java:412] Processing message=[Payload byte[314]][Headers={stompCommand=SEND, nativeHeaders={content-type=[application/json;charset=UTF-8], destination=[/topic/commands.BKN01.20140318]}, simpMessageType=MESSAGE, simpDestination=/topic/commands.BKN01.20140318, contentType=application/json;charset=UTF-8, simpSessionId=ehjcoxb3, id=3cc7b4ae-8ea4-ef8a-6c4d-c3bc1ed23bcd, timestamp=1399400799947}]
2014-05-06 14:26:39 DEBUG [clientInboundChannel-1] o.s.m.s.s.StompBrokerRelayMessageHandler [StompBrokerRelayMessageHandler.java:658] Forwarding message to broker
We also turned tracing on for StompSubProtocolHandler in the org.springframework.web.socket.messaging package and received messages like:
2014-05-07 10:58:58 TRACE [http-nio-8080-exec-5] o.s.w.s.m.StompSubProtocolHandler [StompSubProtocolHandler.java:180] Received message from client session=u8wrnsr6
None of this information provides an easy way to map our message.serverSequence property (which is set before sending to the MQ) to the various ids detailed in the logs.
Is there any way to increase the inbound/outbound channel threads so that ordering stays intact? For example, could the channels be tied to a "route", or could a thread be pinned to a "route"?
Please help.
Thank you,
Dan
[EDITED]
Thanks, I see now. Indeed, at present there is no way to ensure that messages from the external broker are delivered to a client in the exact order they arrived, nor that messages from a client are sent to the external broker in the exact order they were received.
We could assign an index to every Message within a session and then check, and buffer if necessary, to enforce the order, but that would be a new feature. Feel free to create a request in JIRA.
For now you could look into creating a ChannelInterceptor that checks every message to see what session it belongs to and sets an incremental index header on it before it is sent (on the clientInboundChannel or clientOutboundChannel). Then extend StompSubProtocolHandler and StompBrokerRelayMessageHandler to check the index header and try to enforce the order of messages.
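A rough, untested sketch of such an interceptor (the "x-session-seq" header name is made up, and the buffering/reordering in the extended handlers is the hard part and is not shown here):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicLong;

import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.simp.SimpMessageHeaderAccessor;
import org.springframework.messaging.support.ChannelInterceptorAdapter;
import org.springframework.messaging.support.MessageBuilder;

public class SessionSequenceInterceptor extends ChannelInterceptorAdapter {

    private final ConcurrentMap<String, AtomicLong> countersBySession =
            new ConcurrentHashMap<String, AtomicLong>();

    @Override
    public Message<?> preSend(Message<?> message, MessageChannel channel) {
        String sessionId = SimpMessageHeaderAccessor.getSessionId(message.getHeaders());
        if (sessionId == null) {
            return message;
        }
        AtomicLong counter = countersBySession.get(sessionId);
        if (counter == null) {
            countersBySession.putIfAbsent(sessionId, new AtomicLong());
            counter = countersBySession.get(sessionId);
        }
        // stamp a per-session sequence; the extended handlers would buffer and
        // reorder on this value to enforce ordering
        return MessageBuilder.fromMessage(message)
                .setHeader("x-session-seq", counter.incrementAndGet())
                .build();
    }
}

You would register it via ChannelRegistration.setInterceptors(..) in configureClientInboundChannel / configureClientOutboundChannel.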
Related
The snippet below shows the function; please suggest how I can send data to different topics depending on whether there is an error or not.
public Function<KStream<String, ?>, KStream<String, ?>> process() {
    return input -> input.map((key, value) -> {
        try {
            // logic of function here
        } catch (Exception e) {
            // How do I send to a different topic from here??
        }
        return new KeyValue<>(key, value);
    });
}
Set the Kafka consumer binding's enableDlq option to true; when the listener throws an exception, the record is sent to the dead-letter topic after retries are exhausted. If you want to fail immediately, set the consumer binding's maxAttempts property to 1 (the default is 3).
See the documentation.
enableDlq
When set to true, it enables DLQ behavior for the consumer. By default, messages that result in errors are forwarded to a topic named error.<destination>.<group>. The DLQ topic name can be configured by setting the dlqName property or by defining a @Bean of type DlqDestinationResolver. This provides an alternative option to the more common Kafka replay scenario for the case when the number of errors is relatively small and replaying the entire original topic may be too cumbersome. See Dead-Letter Topic Processing for more information. Starting with version 2.0, messages sent to the DLQ topic are enhanced with the following headers: x-original-topic, x-exception-message, and x-exception-stacktrace as byte[]. By default, a failed record is sent to the same partition number in the DLQ topic as the original record. See Dead-Letter Topic Partition Selection for how to change that behavior. Not allowed when destinationIsPattern is true.
Default: false.
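For example, with the functional style in your snippet the binding is typically named process-in-0; a sketch of the relevant properties (the group and DLQ topic names are placeholders, and enableDlq requires the consumer to have a group):

spring.cloud.stream.bindings.process-in-0.group=my-group
spring.cloud.stream.bindings.process-in-0.consumer.maxAttempts=1
spring.cloud.stream.kafka.bindings.process-in-0.consumer.enableDlq=true
spring.cloud.stream.kafka.bindings.process-in-0.consumer.dlqName=process-errors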
I am trying to implement a sample retry mechanism for RabbitMQ using Spring's StatefulRetryOperationsInterceptor.
As stated in the documentation, I need to set up a message key generator because the message id is absent. What I don't understand is the real purpose of the unique id generated per message, i.e. when I used the implementation below I did not have any issue with retry:
StatefulRetryOperationsInterceptor interceptor =
    RetryInterceptorBuilder.stateful()
        .maxAttempts(3)
        .backOffOptions(2000, 1, 2000)
        .messageKeyGenerator(
            new MessageKeyGenerator() {
                @Override
                public Object getKey(Message message) {
                    return 1;
                }
            })
        .build();
container.setAdviceChain(new Advice[] { interceptor });
Stateful retry needs the originating message to be somehow unique - so the retry "state" for the message can be determined - the simplest way is to have the message publisher add a unique message id header.
However, it's possible something in your message body or some other header might be used to uniquely identify the message. Enter the MessageKeyGenerator which is used to determine the unique id.
Using a constant (1 in your case) won't work because every message has the same message key, so they will all be considered deliveries of the same message (from a retry perspective).
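For example, if the message body or a header carries a unique business identifier, a key generator along these lines would work ("orderId" is just a hypothetical header set by your producer):

StatefulRetryOperationsInterceptor interceptor =
    RetryInterceptorBuilder.stateful()
        .maxAttempts(3)
        .backOffOptions(2000, 1, 2000)
        .messageKeyGenerator(new MessageKeyGenerator() {
            @Override
            public Object getKey(Message message) {
                // assumes the producer sets a unique business identifier header
                return message.getMessageProperties().getHeaders().get("orderId");
            }
        })
        .build();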
The framework does provide a MissingMessageIdAdvice which can provide limited support for stateful retry (if added to the advice chain before the retry advice). It adds a messageId to the incoming message.
"Limited" means full stateful retry support is not available - only one redelivery attempt is allowed.
If the redelivery fails, the message is rejected which causes it to be discarded or routed to the DLX/DLQ if so configured. In all cases the "temporary" state is removed from the cache.
Generally, if full retry support is needed and there is no messageId property and there is no way to generate a unique key with a MessageKeyGenerator, I would recommend using stateless retry.
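A minimal stateless equivalent of your interceptor, for comparison (no message key is needed because the retry state is held on the consuming thread):

RetryOperationsInterceptor retryInterceptor =
    RetryInterceptorBuilder.stateless()
        .maxAttempts(3)
        .backOffOptions(2000, 1, 2000)
        .build();
container.setAdviceChain(new Advice[] { retryInterceptor });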
We are using Spring Integration, and every day we see the stack trace below in our logs. Other JMS adapters are working fine; we think only the one below is missing something:
Spring integration configuration:
<jms:message-driven-channel-adapter concurrent-consumers="1" id="jmsInLOAN" destination="queueLOAN" channel="LOANCommonDataChannel" acknowledge="transacted" />
Our MQ statistics for the put count and the messages-read count should match the number of messages read by the adapter exactly. I am worried that Spring Integration's message-driven-channel-adapter is reading extra messages from the queue. Any help would be appreciated.
WARN 07/Jan/2016 09:04:15,438 [org.springframework.jms.listener.DefaultMessageListenerContainer#23-1] springframework.jms.listener.DefaultMessageListenerContainer - [SYSTEM_ID=HBUSLOANIQ] [MESSAGE_ID=null] Execution of JMS message listener failed, and no ErrorHandler has been set.
org.springframework.integration.MessagingException: unsupported payload type [com.ibm.jms.JMSMessage]
at org.springframework.integration.xml.DefaultXmlPayloadConverter.convertToDocument(DefaultXmlPayloadConverter.java:76)
at org.springframework.integration.xml.DefaultXmlPayloadConverter.convertToNode(DefaultXmlPayloadConverter.java:88)
at org.springframework.integration.xml.router.XPathRouter.getChannelIdentifiers(XPathRouter.java:119)
at org.springframework.integration.router.AbstractMessageRouter.determineTargetChannels(AbstractMessageRouter.java:247)
at org.springframework.integration.router.AbstractMessageRouter.handleMessageInternal(AbstractMessageRouter.java:211)
It looks like you are passing the unconverted JMS message (com.ibm.jms.JMSMessage) to the XML Payload converter...
org.springframework.integration.MessagingException: unsupported payload type [com.ibm.jms.JMSMessage]
at org.springframework.integration.xml.DefaultXmlPayloadConverter.convertToDocument(DefaultXmlPayloadConverter.java:76)
Perhaps you have set extract-payload to false?
Although it's not in the configuration you show.
Turning on DEBUG logging will show the payload type of messages passing through the system.
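With log4j, for example, something like this is enough (assuming a log4j.properties configuration):

log4j.logger.org.springframework.integration=DEBUG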
The issue was that both valid and poisonous messages (the latter with a payload type of com.ibm.jms.JMSMessage) were arriving on the queue. Valid messages were processed fine, but the poisonous messages could not be digested by the application and were sent to the backout queue (BOQ).
In our case the BOQ threshold is 3, meaning the application tries to consume a particular message 3 times; if the message is backed out 3 times it is moved to the BOQ, so (msgs read - msgs put)/3 on LOAIQ equals the messages put onto the BOQ in that sampling interval. From the messages put on the BOQ we can see how many messages were backed out of the LOAIQ queue. That is why the message read count is higher than the number of messages put.
So I have request/response queues that I am putting messages on and reading messages off from.
The problem is that I have multiple local instances that are reading/feeding off the same queues, and what happens sometimes is that one instance can read some other instance's reply message.
So is there a way to configure JMS, using Spring, so that each instance reads only the replies to its own requests and not other instances' messages?
I have very little knowledge about JMS and related stuff. So if the above question needs more info then I can dig around and provide it.
Thanks
It's easy!
A JMS message has two properties you can use - JMSMessageID and JMSCorrelationID.
The JMSMessageID is supposed to be unique for each message, so you could do something like this:
Let the client send a request, then start to listen for responses where the correlation id = the sent message id. The server side is then responsible for copying the message id of the request to the correlation id of the response. Something like: responseMsg.setJMSCorrelationID(requestMsg.getJMSMessageID());
Example client side code:
Session session = getSession();
Message msg = createRequest();
MessageProducer mp = session.createProducer(session.createQueue("REQUEST.QUEUE"));
mp.send(msg,DeliveryMode.NON_PERSISTENT,0,TIMEOUT);
// If session is transactional - commit now.
String msgId = msg.getJMSMessageID();
MessageConsumer mc = session.createConsumer(session.createQueue("REPLY.QUEUE"),
    "JMSCorrelationID='" + msgId + "'");
Message response = mc.receive(TIMEOUT);
A more performant solution would be to use a dedicated reply queue per consumer instance. Simply set message.setJMSReplyTo(session.createQueue("REPLY.QUEUE."+getInstanceId())); and make sure the server side sends the response to requestMsg.getJMSReplyTo() and not to a hard-coded value.
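A hedged sketch of the matching server side (createResponse is a placeholder for building your reply; the fallback queue name follows the client sketch above):

// server side: copy the request's message id into the reply's correlation id
Message response = createResponse(requestMsg); // placeholder - build your reply here
response.setJMSCorrelationID(requestMsg.getJMSMessageID());
Destination replyTo = requestMsg.getJMSReplyTo() != null
        ? requestMsg.getJMSReplyTo()
        : session.createQueue("REPLY.QUEUE"); // fallback when no reply-to is set
MessageProducer producer = session.createProducer(replyTo);
producer.send(response);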
In my application, I have a queue (HornetQ) set up on JBoss 7 AS.
I have used Spring Batch to do some work once the message is received (save values in the database, etc.), and then the consumer commits the JMS session.
Sometimes when there is an exception while processing a message, the execution of the consumer is aborted abruptly and the message remains in the "in delivery" state. There are about 30 messages in this state on my production queue.
I have tried restarting the consumer, but the state of these messages does not change. The only way to remove these messages from the queue is to restart the queue. But before doing that, I want a way to read these messages so that they can be corrected and sent to the queue again to be processed.
I have tried using QueueBrowser to read them, but it does not work. I have searched a lot on Google but could not find any way to read these messages.
I am using a Transacted session, where once the message is processed, I am calling:
session.commit();
This sends the acknowledgement.
I am implementing Spring's
org.springframework.jms.listener.SessionAwareMessageListener
to receive messages and then process them.
While processing the messages, I am using Spring Batch to insert some data into the database.
In one particular case, it tried to insert data too big for the column, which threw an exception and the transaction was aborted.
Now, I have fixed my producer and consumer not to have such data, so that this case should not happen again.
But my question is what about the 30 "in delivery" state messages that are in my production queue? I want to read them so that they can be corrected and sent to the queue again to be processed. Is there any way to read these messages? Once I know their content, I can restart the queue and submit them again (after correcting them).
Thanking you in anticipation,
Suvarna
It all depends on the transaction mode you are using.
For instance, if you use transactions:
// session here is a TX session
MessageConsumer consumer = session.createConsumer(someQueue);
connection.start(); // the connection (not the session) must be started for delivery to begin
Message msg = consumer.receive...
session.rollback(); // this will make the messages be redelivered
if you are using non-TX:
// session here is auto-ack
MessageConsumer consumer = session.createConsumer(someQueue);
connection.start();
// this means the message is ACKed as it is received (auto-ACK)
Message msg = consumer.receive...
// however the consumer here could have a buffer from the server...
// if you are not using the consumer any longer, close it
consumer.close(); // this will release messages held in the client buffer
Alternatively you could also set consumerWindowSize=0 on the connectionFactory.
This is the 2.2.5 documentation, but it has not changed in later releases:
http://docs.jboss.org/hornetq/2.2.5.Final/user-manual/en/html/flow-control.html
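For example, if the factory you look up (or build) is the HornetQ implementation class, something like this should work (the JNDI name is just the typical AS7 default, and the cast is an assumption about what the lookup returns):

HornetQConnectionFactory cf =
        (HornetQConnectionFactory) initialContext.lookup("java:/ConnectionFactory");
// 0 disables client-side buffering; messages stay on the server until receive() is called
cf.setConsumerWindowSize(0);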
I"m covering all the possibilities I could think of since you're not being specific on how you are consuming. If you provide me more detail then I will be able to tell you more:
You can indeed read the messages in the queue using JMX (with, for example, jconsole).
In JBoss AS7 you can do it the following way:
MBeans>jboss.as>messaging>default>myJmsQueue>Operations
listMessagesAsJson
[edit]
Since 2.3.0 you have a dedicated method for this specific case:
listDeliveringMessages
See https://issues.jboss.org/browse/HORNETQ-763
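If you prefer to do it programmatically rather than through jconsole, a rough sketch over JMX follows; the service URL, port, ObjectName and the operation's parameter list are assumptions you should verify against what jconsole shows for your server:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class QueueMessageDump {
    public static void main(String[] args) throws Exception {
        // assumed AS7 management endpoint; the jboss-client libraries must be on the classpath
        JMXServiceURL url = new JMXServiceURL("service:jmx:remoting-jmx://localhost:9999");
        JMXConnector connector = JMXConnectorFactory.connect(url, null);
        try {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            // assumed name - copy the exact ObjectName from the MBeans tab in jconsole
            ObjectName queue = new ObjectName(
                    "jboss.as:subsystem=messaging,hornetq-server=default,jms-queue=myJmsQueue");
            String json = (String) mbsc.invoke(queue, "listMessagesAsJson",
                    new Object[] { null }, new String[] { "java.lang.String" });
            System.out.println(json);
        }
        finally {
            connector.close();
        }
    }
}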