I am using Redis as a queue (via the Spring Integration queue-inbound/outbound-channel-adapters) to distribute tasks (a message goes into the queue, etc.).
As the throughput is quite high, we observed that although the messages were sent to the Redis queue, a lot of them were lost and never arrived at the component after the inbound adapter (a header router).
The channel config is attached below. Our theory was that the header router after the inbound adapter was unable to keep up with the rate of messages read from the queue, so messages were lost.
To fix this, we inserted an intermediate queue channel between the inbound adapter and the header router.
This works fine, but we don't fully understand the solution, or whether it is the proper one.
An expert view and opinion about this configuration would be welcome!
Thanks
<!-- a Queue Inbound Channel Adapter is available to 'right pop' messages
from a Redis List. -->
<redis:queue-inbound-channel-adapter
id="fromRedis" channel="in" queue="${name}"
receive-timeout="1000" recovery-interval="3000" expect-message="true"
auto-startup="true"/>
<!-- a queue to avoid lost messages before the header router -->
<int:channel id="in">
<int:queue capacity="1000"/>
</int:channel>
<!-- a bridge to connect channels and have a poller -->
<int:bridge input-channel="in" output-channel="out">
<int:poller fixed-delay="500" />
</int:bridge>
<int:header-value-router id="router" timeout="15000"
input-channel="out" header-name="decision"
resolution-required="false" default-output-channel="defaultChannel" />
--- added on 26/02
To insert messages into Redis we have a web service, but it really is as you said: it simply writes messages into Redis:
for... channel.send(msg)
Nothing more.
About your answer: I am now thinking of removing the 'in' channel and its queue and using the header-value-router directly, but I have more questions:
I think the right solution is a low timeout value on the header-value-router, so I get the error notification faster if no consumer is available. If I don't set a timeout, it will block indefinitely, and that is a bad idea, isn't it?
I don't know how to manage the MessageDeliveryException, because the router doesn't have an error-channel configuration option.
I think that if I can handle this error and get the message back, I can re-send it to Redis. There are other servers that take messages from Redis, and one of them might be able to handle it.
I attach my proposed solution below, but it is not complete, and we are not sure about the error management, as I explained above.
<!-- a Queue Inbound Channel Adapter is available to 'right pop' messages
from a Redis List. -->
<redis:queue-inbound-channel-adapter
id="fromRedis" channel="in" queue="${name}"
receive-timeout="1000" recovery-interval="3000" expect-message="true"
auto-startup="true"/>
<!-- a header-value-router with quite low timeout -->
<int:header-value-router id="router" timeout="150"
input-channel="in" header-name="decision"
resolution-required="false" default-output-channel="defaultChannel" />
<!-- if a MessageDeliveryException occurs here, what should we do? -->
<int:channel id="someConsumerHeaderValue">
<int:dispatcher task-executor="ConsumerExecutor" />
</int:channel>
<!-- If all 5 core threads are busy, messages queue up (capacity 5); if the queue is full, the pool can grow by 5 more worker threads; and if no more threads are available... a MessageDeliveryException? -->
<task:executor id="ConsumerExecutor" pool-size="5-10"
queue-capacity="5" />
Well, it's great to see such an observation. It might improve the Framework somehow.
So, I'd like to see:
Some test case to reproduce it from the Framework perspective.
(Although I guess it is enough to send a lot of messages to Redis and use your config to consume them. Correct me if anything else is needed.)
The downstream flow after the <int:header-value-router>. Look, you use timeout="15000" there, which is a synonym for send-timeout:
Specify the maximum amount of time in milliseconds to wait
when sending Messages to the target MessageChannels if blocking
is possible (e.g. a bounded queue channel that is currently full).
By default the send will block indefinitely.
Synonym for 'timeout' - only one can be supplied.
From here I can say that if your downstream consumer is slow enough, on some QueueChannel there you end up in:
/**
 * Inserts the specified element at the tail of this queue, waiting if
 * necessary up to the specified wait time for space to become available.
 *
 * @return {@code true} if successful, or {@code false} if
 * the specified waiting time elapses before space is available
 * @throws InterruptedException {@inheritDoc}
 * @throws NullPointerException {@inheritDoc}
 */
public boolean offer(E e, long timeout, TimeUnit unit)
    ....
    while (count.get() == capacity) {
        if (nanos <= 0)
            return false;
        nanos = notFull.awaitNanos(nanos);
    }
Pay attention to that return false: it indicates exactly where the message gets lost. That is also known as a back-pressure drop strategy.
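To make that concrete, here is a tiny standalone demo of the same offer(timeout) contract, using nothing but java.util.concurrent (the capacity and timeout values are arbitrary):

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class OfferDropDemo {

    public static void main(String[] args) throws InterruptedException {
        LinkedBlockingQueue<String> queue = new LinkedBlockingQueue<>(1);
        queue.offer("first"); // fills the bounded queue

        // Nobody is draining the queue, so this waits 150 ms and then gives up:
        boolean sent = queue.offer("second", 150, TimeUnit.MILLISECONDS);
        System.out.println(sent); // false: with a finite send-timeout the message is simply dropped
    }
}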
Let me know if you see a different picture there.
You might consider removing that timeout="15000" to get the same blocking behavior as the queue channel.
UPDATE
Well, the error handling works a bit differently. The "guilty" component just throws the Exception, as in raw Java, and it is OK that this component isn't responsible for catching the Exception; that is up to the caller.
And the caller in our case is an upstream component: the <redis:queue-inbound-channel-adapter>.
Any inbound channel adapter has an error-channel option: through the <poller> if it is a MessageSource, or directly when it is a MessageProducer.
I'm sure you will be able to handle:
if (!sent) {
throw new MessageDeliveryException(message,
"failed to send message to channel '" + channel + "' within timeout: " + timeout);
}
in that error-channel sub-flow and achieve your requirements for recovery.
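For illustration, here is a minimal sketch of such an error-channel sub-flow. It assumes an error-channel="redisErrors" attribute on the adapter and a RedisTemplate bean; both names are assumptions, not part of the original config. It extracts the failed message from the exception and 'left pushes' its payload back onto the Redis list (the adapter 'right pops'), so another server can pick it up:

import org.springframework.beans.factory.annotation.Value;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.integration.annotation.MessageEndpoint;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageDeliveryException;
import org.springframework.messaging.support.ErrorMessage;

@MessageEndpoint
public class RedisRequeueErrorHandler {

    private final RedisTemplate<String, Object> redisTemplate;
    private final String queueName;

    public RedisRequeueErrorHandler(RedisTemplate<String, Object> redisTemplate,
            @Value("${name}") String queueName) {
        this.redisTemplate = redisTemplate;
        this.queueName = queueName;
    }

    // "redisErrors" is a hypothetical channel, wired via
    // error-channel="redisErrors" on the queue-inbound-channel-adapter.
    @ServiceActivator(inputChannel = "redisErrors")
    public void requeue(ErrorMessage errorMessage) {
        Throwable cause = errorMessage.getPayload();
        if (cause instanceof MessageDeliveryException) {
            Message<?> failed = ((MessageDeliveryException) cause).getFailedMessage();
            // Push the payload back so any of the servers can consume it again.
            this.redisTemplate.opsForList().leftPush(this.queueName, failed.getPayload());
        }
    }
}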
Related
I have listeners configured in XML like this
<rabbit:listener-container connection-factory="connectionFactory" concurrency="1" acknowledge="manual">
<rabbit:listener ref="messageListener" queue-names="${address.queue.s1}" exclusive="true"/>
<rabbit:listener ref="messageListener" queue-names="${address.queue.s2}" exclusive="true"/>
<rabbit:listener ref="messageListener" queue-names="${address.queue.s3}" exclusive="true"/>
<rabbit:listener ref="messageListener" queue-names="${address.queue.s4}" exclusive="true"/>
<rabbit:listener ref="messageListener" queue-names="${address.queue.s5}" exclusive="true"/>
<rabbit:listener ref="messageListener" queue-names="${address.queue.s6}" exclusive="true"/>
</rabbit:listener-container>
I am trying to move that to Java configuration, and I don't see a way to add more than one MessageListener to a listener container. Creating multiple listener-container beans is not an option in my case, because I won't know the number of queues to consume from until runtime; the queue names come from a configuration file.
I did the following
@PostConstruct
public void init()
{
for (String queue : queues.split(","))
{
// The Consumers would not connect if I don't call the 'start()' method.
messageListenerContainer(queue).start();
}
}
@Bean
public SimpleMessageListenerContainer messageListenerContainer(String queue)
{
SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(consumerConnectionFactory);
container.setQueueNames(queue);
container.setMessageListener(messageListener());
// Set Exclusive Consumer 'ON'
container.setExclusive(true);
// Should be restricted to '1' to maintain data consistency.
container.setConcurrentConsumers(1);
container.setAcknowledgeMode(AcknowledgeMode.MANUAL);
return container;
}
It "sort" of works BUT I see some weird behavior with lots of ghost channels getting opened which never used to happen with the XML configuration. So it makes me suspicious that I am doing something wrong. I would like to know the correct way of creating MessageListenerContainers in Java configuration? Simply put, "How does Spring convert 'rabbit:listener-container' with multiple 'rabbit:listener' to java objects properly?" Any help/insight into this would be greatly appreciated.
Business Case
We have a publisher that publishes user profile updates. The publisher can dispatch multiple updates for the same user, and we have to process them in the correct order to maintain data integrity in the data store.
Example: User ABC, Publish -> {UsrA:Change1, ..., UsrA:Change2, ..., UsrA:Change3} -> the consumer HAS to process {UsrA:Change1, ..., UsrA:Change2, ..., UsrA:Change3} in that order.
In our previous setup, we had one queue that got all the user updates, and we had a consumer app with concurrency = 5. There were multiple app servers running the consumer app. That resulted in 5 x (number of consumer app instances) channels/threads that could process the incoming messages. The speed was GREAT! But we were getting out-of-order processing quite often, resulting in data corruption.
To maintain strict FIFO order and still process messages in parallel as much as possible, we implemented queue sharding. We have an x-consistent-hash exchange with a hash-header on employee-id. Our publisher publishes messages to the hash exchange, and we have multiple sharded queues bound to it. The idea is that all changes for a given user (user A, for example) end up queued in the same shard. We then have our consumers connect to the sharded queues in 'exclusive' mode with concurrentConsumers = 1 and process the messages. That way we are sure to process messages in the correct order while still processing in parallel. We can get more parallelism by increasing the number of shards.
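For context, publishing to such a hash exchange just means setting the header the exchange hashes on; here is a hedged sketch with RabbitTemplate (the exchange name and publisher class are illustrative assumptions):

import org.springframework.amqp.rabbit.core.RabbitTemplate;

public class UserUpdatePublisher {

    private final RabbitTemplate rabbitTemplate;

    public UserUpdatePublisher(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    // The routing key is irrelevant when an x-consistent-hash exchange hashes on a header.
    public void publish(String employeeId, Object userChange) {
        this.rabbitTemplate.convertAndSend("user.updates.hash", "", userChange, message -> {
            message.getMessageProperties().setHeader("employee-id", employeeId);
            return message;
        });
    }
}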
Now on to the consumer configuration
We have the consumer app deployed on multiple app servers.
Original Approach:
I simply added multiple <rabbit:listener> elements to my <rabbit:listener-container> in the consumer app, as you can see above, and it works great, except that the server that starts first gets an exclusive lock on all the sharded queues, and the other servers just sit there doing no work.
New Approach:
We moved the sharded queue names to the application configuration file. Like so
Consumer Instance 1 : Properties
queues=user.queue.s1,user.queue.s2,user.queue.s3
Consumer Instance 2 : Properties
queues=user.queue.s4,user.queue.s5,user.queue.s6
Also worth noting: we could have any number of consumer instances, and the shards could be distributed unevenly between instances depending on resource availability.
With the queue names moved to the configuration file, the XML configuration will no longer work, because we cannot dynamically add <rabbit:listener> elements to the <rabbit:listener-container> like we did before.
Then we decided to switch over to the Java configuration. That is where we are STUCK!
We did this initially
@Bean
public SimpleMessageListenerContainer messageListenerContainer()
{
SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(consumerConnectionFactory);
container.setQueueNames(queues.split(","));
container.setMessageListener(messageListener());
container.setMissingQueuesFatal(false);
// Set Exclusive Consumer 'ON'
container.setExclusive(true);
// Should be restricted to '1' to maintain data consistency.
container.setConcurrentConsumers(1);
container.setAcknowledgeMode(AcknowledgeMode.MANUAL);
container.start();
return container;
}
and it works, BUT all our queues are on one connection, sharing one channel. That is not good for speed. What we want is one connection, with every queue getting its own channel.
Next Step
No success here YET! The Java configuration in my original question is where we are now.
I am baffled why this is so HARD to do. Clearly the XML configuration does something that is not easily doable in Java configuration (or at least it feels that way to me). I see this as a gap that needs to be filled, unless I am completely missing something. Please correct me if I am wrong. This is a genuine business case, NOT some fictitious edge case. Please feel free to comment if you think otherwise.
and it works, BUT all our queues are on one connection, sharing one channel. That is not good for speed. What we want is one connection, with every queue getting its own channel.
If you switch to the DirectMessageListenerContainer, each queue in that configuration gets its own Channel.
See the documentation.
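For illustration, a hedged sketch of that switch, reusing the queues property and the messageListener() and consumerConnectionFactory beans from the question (consumersPerQueue = 1 keeps the strict per-shard ordering):

@Bean
public DirectMessageListenerContainer messageListenerContainer(
        @Value("#{'${queues}'.split(',')}") String[] queueNames)
{
    DirectMessageListenerContainer container = new DirectMessageListenerContainer(consumerConnectionFactory);
    container.setQueueNames(queueNames); // each queue gets its own Channel here
    container.setMessageListener(messageListener());
    container.setExclusive(true);
    container.setConsumersPerQueue(1); // keeps FIFO per shard
    container.setAcknowledgeMode(AcknowledgeMode.MANUAL);
    return container;
}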
To answer your original question (pre-edit):
@Bean
public SimpleMessageListenerContainer messageListenerContainer1(@Value("${address.queue.s1}") String queue)
{
SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(consumerConnectionFactory);
container.setQueueNames(queue);
container.setMessageListener(messageListener());
// Set Exclusive Consumer 'ON'
container.setExclusive(true);
// Should be restricted to '1' to maintain data consistency.
container.setConcurrentConsumers(1);
container.setAcknowledgeMode(AcknowledgeMode.MANUAL);
return container;
}
...
@Bean
public SimpleMessageListenerContainer messageListenerContainer6(@Value("${address.queue.s6}") String queue)
{
SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(consumerConnectionFactory);
container.setQueueNames(queue);
container.setMessageListener(messageListener());
// Set Exclusive Consumer 'ON'
container.setExclusive(true);
// Should be restricted to '1' to maintain data consistency.
container.setConcurrentConsumers(1);
container.setAcknowledgeMode(AcknowledgeMode.MANUAL);
return container;
}
Here is the Java configuration for creating a SimpleMessageListenerContainer:
@Value("#{'${queue.names}'.split(',')}")
private String[] queueNames;
#Bean
public SimpleMessageListenerContainer listenerContainer(final ConnectionFactory connectionFactory) {
final SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
container.setConnectionFactory(connectionFactory);
container.setQueueNames(queueNames);
container.setMessageListener(vehiclesReceiver());
setCommonQueueProperties(container);
return container;
}
Each <rabbit:listener> creates its own SimpleMessageListenerContainer bean with the same ConnectionFactory. To do the same in Java config, you have to declare as many SimpleMessageListenerContainer beans as you have queues: one for each of them.
You may also consider using the @RabbitListener approach instead: https://docs.spring.io/spring-amqp/docs/2.0.4.RELEASE/reference/html/_reference.html#async-annotation-driven
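For completeness, a hedged sketch of that annotation-driven variant (bean and method names are illustrative); pairing it with a DirectRabbitListenerContainerFactory again gives each queue its own channel:

@Bean
public DirectRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory)
{
    DirectRabbitListenerContainerFactory factory = new DirectRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    factory.setConsumersPerQueue(1); // FIFO per shard
    factory.setAcknowledgeMode(AcknowledgeMode.MANUAL);
    return factory;
}

// One listener method consuming from all configured shards, one channel per queue.
@RabbitListener(queues = "#{'${queues}'.split(',')}", exclusive = true)
public void onUserUpdate(Message message, com.rabbitmq.client.Channel channel) throws IOException
{
    // process, then ack manually
    channel.basicAck(message.getMessageProperties().getDeliveryTag(), false);
}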
At unit-test time, I try to bridge a Spring Integration default channel to a queue channel, since I want to check how many messages flow into this channel.
<int:filter input-channel="prevChannel" output-channel="myChannel"/>
<!-- the reason I have the bridge below is that I want to check the number of
     messages after the filter. I cannot check prevChannel, since it is before
     filtering, and I cannot check aggregateChannel, because it has another
     processing branch. -->
<int:bridge input-channel="myChannel" output-channel="aggregateChannel"/>
<!-- in the test XML I import the normal workflow XML above and add the
     configuration below to redirect messages from myChannel to checkMyChannel
     for checking. -->
<int:bridge input-channel="myChannel"
            output-channel="checkMyChannel"/>
<int:channel id="checkMyChannel">
    <int:queue/>
</int:channel>
I autowired checkMyChannel in my unit test, but checkMyChannel.getQueueSize() always returns 0. Is there something I did wrong?
You didn't share the test case with us, so we don't have the entire picture, but it looks like you have a race condition there: someone polls all your messages from checkMyChannel before you start asserting on getQueueSize().
In our tests we don't use a <poller> for such cases; we use PollableChannel.receive(timeout) manually.
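For example, a minimal sketch of such a test method (the channel name comes from your config; the timeout value is arbitrary):

@Autowired
@Qualifier("checkMyChannel")
private PollableChannel checkMyChannel;

@Test
public void messageReachesCheckMyChannel() {
    // ... trigger the flow into prevChannel here ...
    Message<?> received = this.checkMyChannel.receive(10000); // wait up to 10 s
    assertNotNull("no message arrived on checkMyChannel", received);
}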
Got this fixed: I had to declare myChannel as a publish-subscribe-channel.
How to test Spring Integration
This one helps; in my case there was a race condition, since
"A regular channel with multiple subscribers will round-robin ..."
Using Spring Integration, every minute I need to read the list of orders from the database whose status is in-progress and make a 3rd-party REST call for each order, in sequence or in parallel. Below is my code:
<int:inbound-channel-adapter ref="orderReader" method="readOrderRecords"
channel="orderListChannel">
<int:poller cron="0/60 * * * * *"/>
</int:inbound-channel-adapter>
<int:splitter input-channel="orderListChannel" method="split" ref="orderSplitter"
output-channel="processOrders">
</int:splitter>
<int:publish-subscribe-channel id="processOrders"/>
<int:chain id="orderProcess_Chain"
input-channel="processOrders">
...contain the REST call config
</int:chain>
The above code is not working as expected: if there are n in-progress records in the database, it processes only the first order (orderProcess_Chain is invoked only for the first order).
What is wrong in the code?
First of all, we need to see your DB-reading logic, i.e. your orderReader.readOrderRecords().
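For reference, the splitter emits one message per element of whatever readOrderRecords() returns, so the poller must hand it the whole list on each poll. A hedged sketch of the expected shape (the table and column names are illustrative):

import java.util.List;
import javax.sql.DataSource;
import org.springframework.jdbc.core.JdbcTemplate;

public class OrderReader {

    private final JdbcTemplate jdbcTemplate;

    public OrderReader(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    // Returning the whole list in one message lets the splitter fan it out into
    // n messages; returning a single record per poll would explain why only one
    // order reaches the chain each minute.
    public List<String> readOrderRecords() {
        return this.jdbcTemplate.queryForList(
                "SELECT order_id FROM orders WHERE status = 'IN_PROGRESS'", String.class);
    }
}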
On the other hand, you should be sure to finish your REST call properly. It is actually a request/reply protocol, but I see only one-way interaction. Right, that is correct for your purpose, but you should discard (void) the REST response somehow.
And one more clue: you can always switch on DEBUG logging for the org.springframework.integration category and analyze how your messages travel. For example, you may have some Exception, but there is no exception handling in your config.
We are using Spring Integration, and daily in our logs we can see the stack trace below. Other JMS adapters are working fine; we think only this one is missing something:
Spring integration configuration:
<jms:message-driven-channel-adapter concurrent-consumers="1" id="jmsInLOAN" destination="queueLOAN" channel="LOANCommonDataChannel" acknowledge="transacted" />
Please find below the MQ statistics of put and read message counts; the number of messages read by the adapter should match the put count exactly. I am worried about Spring Integration's message-driven-channel-adapter reading extra messages from the queue. Any help would be appreciated.
WARN 07/Jan/2016 09:04:15,438 [org.springframework.jms.listener.DefaultMessageListenerContainer#23-1] springframework.jms.listener.DefaultMessageListenerContainer - [SYSTEM_ID=HBUSLOANIQ] [MESSAGE_ID=null] Execution of JMS message listener failed, and no ErrorHandler has been set.
org.springframework.integration.MessagingException: unsupported payload type [com.ibm.jms.JMSMessage]
at org.springframework.integration.xml.DefaultXmlPayloadConverter.convertToDocument(DefaultXmlPayloadConverter.java:76)
at org.springframework.integration.xml.DefaultXmlPayloadConverter.convertToNode(DefaultXmlPayloadConverter.java:88)
at org.springframework.integration.xml.router.XPathRouter.getChannelIdentifiers(XPathRouter.java:119)
at org.springframework.integration.router.AbstractMessageRouter.determineTargetChannels(AbstractMessageRouter.java:247)
at org.springframework.integration.router.AbstractMessageRouter.handleMessageInternal(AbstractMessageRouter.java:211)
It looks like you are passing the unconverted JMS message (com.ibm.jms.JMSMessage) to the XML Payload converter...
org.springframework.integration.MessagingException: unsupported payload type [com.ibm.jms.JMSMessage]
at org.springframework.integration.xml.DefaultXmlPayloadConverter.convertToDocument(DefaultXmlPayloadConverter.java:76)
Perhaps you have set extract-payload to false?
Although it's not in the configuration you show.
Turning on DEBUG logging will show the payload type of messages passing through the system.
The issue was that we were getting both valid and poisonous messages (the latter with payload type com.ibm.jms.JMSMessage) on the queue. Valid messages were processed fine, but the poisonous messages could not be digested by the application and were sent to the backout queue (BOQ).
In our case, the BOQ threshold is 3, which means my application tries to consume a particular message 3 times; if the message is backed out 3 times, it is moved to the BOQ, and (msgs read - msgs put)/3 on LOAIQ == the msgs put onto the BOQ in that sampling interval. From the messages put on the BOQ we can see how many messages were backed out of the LOAIQ queue. That is why the message read count is higher than the message put count.
In my application, I have a queue (HornetQ) set up on JBoss 7 AS.
I have used Spring Batch to do some work once a message is received (saving values in a database, etc.), and then the consumer commits the JMS session.
Sometimes, when there is an exception while processing a message, the execution of the consumer is aborted abruptly, and the message remains in the "in delivery" state. There are about 30 messages in this state on my production queue.
I have tried restarting the consumer, but the state of these messages does not change. The only way to remove these messages from the queue is to restart the queue. But before doing that, I want a way to read these messages so that they can be corrected and sent to the queue again to be processed.
I have tried using a QueueBrowser to read them, but it does not work. I have searched a lot on Google but could not find any way to read these messages.
I am using a transacted session where, once the message is processed, I call:
session.commit();
This sends the acknowledgement.
I am implementing Spring's
org.springframework.jms.listener.SessionAwareMessageListener
to receive messages and then process them.
While processing the messages, I use Spring Batch to insert some data into a database. For a particular case, it tries to insert data too big to fit in a column. That throws an exception, and the transaction is aborted.
Now, I have fixed my producer and consumer not to produce such data, so this case should not happen again.
But my question is: what about the 30 "in delivery" messages in my production queue? I want to read them so that they can be corrected and sent to the queue again to be processed. Is there any way to read these messages? Once I know their content, I can restart the queue and submit them again (after correcting them).
It all depends on the transaction mode you are using.
For instance, if you use transactions:
// session here is a TX session
MessageConsumer consumer = session.createConsumer(someQueue);
connection.start(); // note: delivery is started on the Connection, not the Session
Message msg = consumer.receive(...);
session.rollback(); // this will make the messages be redelivered
if you are using non-TX:
// session here is auto-ack
MessageConsumer consumer = session.createConsumer(someQueue);
connection.start(); // note: delivery is started on the Connection, not the Session
// this means the message is ACKed as we receive it (auto-ACK)
Message msg = consumer.receive(...);
// however, the consumer may still hold a buffer of messages from the server...
// if you are not using the consumer any longer, close it:
consumer.close(); // this releases the messages in the client buffer
Alternatively you could also set consumerWindowSize=0 on the connectionFactory.
This is from the 2.2.5 docs, but it never changed in the following releases:
http://docs.jboss.org/hornetq/2.2.5.Final/user-manual/en/html/flow-control.html
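A hedged sketch of that setting (the JNDI name is an assumption; with consumerWindowSize = 0 the client buffers nothing, so undelivered messages stay on the server):

import javax.naming.InitialContext;
import org.hornetq.jms.client.HornetQConnectionFactory;

public class NoBufferConnectionFactory {

    public static HornetQConnectionFactory lookup() throws Exception {
        InitialContext ctx = new InitialContext();
        // "java:/ConnectionFactory" is the typical JBoss AS7 binding; adjust as needed.
        HornetQConnectionFactory cf =
                (HornetQConnectionFactory) ctx.lookup("java:/ConnectionFactory");
        cf.setConsumerWindowSize(0); // no client-side message buffering
        return cf;
    }
}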
I"m covering all the possibilities I could think of since you're not being specific on how you are consuming. If you provide me more detail then I will be able to tell you more:
You can indeed read your messages in the queue using JMX (with, for example, jconsole).
In JBoss AS7 you can do it the following way:
MBeans>jboss.as>messaging>default>myJmsQueue>Operations
listMessagesAsJson
[edit]
Since 2.3.0 there is a dedicated method for this specific case:
listDeliveringMessages
See https://issues.jboss.org/browse/HORNETQ-763
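If you prefer doing this programmatically rather than through jconsole, here is a hedged sketch of the same JMX call (the service URL and ObjectName follow AS7 conventions and will need adapting to your setup; the single String argument is the optional filter):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class QueueMessageDump {

    public static void main(String[] args) throws Exception {
        // AS7 exposes JMX over the management port (assumption: localhost:9999).
        JMXServiceURL url = new JMXServiceURL("service:jmx:remoting-jmx://localhost:9999");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            ObjectName queue = new ObjectName(
                    "jboss.as:subsystem=messaging,hornetq-server=default,jms-queue=myJmsQueue");
            // Null filter = all messages; on HornetQ 2.3+ the dedicated
            // listDeliveringMessages operation narrows this to the in-delivery ones.
            String json = (String) mbsc.invoke(queue, "listMessagesAsJson",
                    new Object[] { null }, new String[] { String.class.getName() });
            System.out.println(json);
        }
    }
}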