What is the life cycle of consumers? - masstransit

Are message consumers created only once when the bus starts, or every time a message of the corresponding type arrives at the endpoint?
I mean this method of subscribing to messages:
cfg.ReceiveEndpoint(host, "customer_update_queue", e =>
{
    e.Consumer<UpdateCustomerConsumer>();
});

A new consumer instance is created for each message received on the endpoint. Once the message is consumed, the consumer instance is released (and if it's IDisposable, disposed as well).

Related

How to ensure that JMSTemplate caches consumer i.e. com.ibm.mq.jms.MQQueueReceiver?

I am facing a scenario where the reply queue I connect to runs out of handles. I have traced it to the fact that my JMS producers are being cached but not my JMS consumers. I am able to send and receive messages just fine, so there is no problem with connecting, sending, or receiving to/from the queues. I am using the CachingConnectionFactory (SessionCacheSize = 10) with com.ibm.mq.jms.MQQueueConnectionFactory as the target factory while instantiating the JmsTemplate. A code snippet follows:
:
:
String replyQueue = "MyQueue"; // reply queue which runs out of handles
messageCreator.setReplyToQueue(new MQQueue(replyQueue));
jmsTemplate.setReceiveTimeout(receiveTimeout);
jmsTemplate.send(destination, messageCreator); // send to the destination queue
Message message = jmsTemplate.receiveSelected(replyQueue,
        String.format("JMSCorrelationID = '%s'", messageCreator.getMessageId()));
:
:
From the logs (JMS TRACE is enabled), the producer is cached, so the handle count on the destination queue does not increase.
// The first time around (for the producer)
Registering cached JMS MessageProducer for destination [queue:///<destination>]: com.ibm.mq.jms.MQQueueSender#c9x758b
// Second time around, the cached producer is reused
Found cached JMS MessageProducer for destination [queue:///<destination>]: com.ibm.mq.jms.MQQueueSender#c9x758b
However, the handles for the replyQueue keep increasing because, for every call to that queue, I see a new JMS consumer being registered. Ultimately the calls to open the replyQueue fail because of MQRC_HANDLE_NOT_AVAILABLE.
// First time around
Registering cached JMS MessageConsumer for destination [queue:///<replyQueue>]: com.ibm.mq.jms.MQQueueReceiver#b3ytd25b
// Second time around, another MessageConsumer is registered!
Registering cached JMS MessageConsumer for destination [queue:///<replyQueue>]: com.ibm.mq.jms.MQQueueReceiver#re25b
My memory is a bit dim on this, but here is what is happening. You are receiving messages based on a message selector, and that selector changes on every call. As a test, either remove the selector or make it a constant and see what happens. Because the cache pools by connection/session/consumer, and the consumer (keyed by its selector) is always different, every receive requires a new cache entry.
After you go through your 10 sessions, a new connection is created, but the existing one is not closed. Increase your session cache size to 100, for example, and the connection count on the MQ broker should climb ten times more slowly.
You need to create a new consumer for every receive, because your correlation ID is always changing, so just cache the connection/session. No matter what you do, you will always have a round trip to the broker for each new correlation ID.
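A minimal sketch of the "cache connection/session, not consumers" suggestion, assuming Spring's CachingConnectionFactory is the caching layer in play (the Spring JMS property names below are real; the IBM MQ connection details are elided and the timeout value is illustrative):

import org.springframework.jms.connection.CachingConnectionFactory;
import org.springframework.jms.core.JmsTemplate;
import com.ibm.mq.jms.MQQueueConnectionFactory;

public class ReplyTemplateConfig {

    public JmsTemplate replyJmsTemplate() throws Exception {
        MQQueueConnectionFactory targetFactory = new MQQueueConnectionFactory();
        // ... configure host, port, channel and queue manager as in your existing setup ...

        CachingConnectionFactory cachingFactory = new CachingConnectionFactory(targetFactory);
        cachingFactory.setSessionCacheSize(10);   // keep caching connections/sessions
        cachingFactory.setCacheProducers(true);   // producers are safe to cache per destination
        cachingFactory.setCacheConsumers(false);  // consumers are cached per destination + selector,
                                                  // so a changing correlation-ID selector would pile
                                                  // up receivers; create them per receive instead

        JmsTemplate template = new JmsTemplate(cachingFactory);
        template.setReceiveTimeout(5000); // illustrative timeout
        return template;
    }
}

With consumer caching off, each receiveSelected(...) still opens a receiver for its one-off selector, but that receiver is closed again when the session goes back to the cache instead of lingering as a dead cache entry holding a queue handle.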

Spring integration messages queue

I have a JMS message-driven endpoint like this:
@Bean
public JmsMessageDrivenEndpoint fsJmsMessageDrivenEndpoint(ConnectionFactory fsConnectionFactory,
        Destination fsInboundDestination,
        MessageConverter fsMessageConverter) {
    return Jms.messageDrivenChannelAdapter(fsConnectionFactory)
            .destination(fsInboundDestination)
            .jmsMessageConverter(fsMessageConverter)
            .outputChannel("fsChannelRouter.input")
            .errorChannel("fsErrorChannel.input")
            .get();
}
So my question is: will the endpoint fetch the next message before the current message has been processed? And if so, will it keep pulling messages from the MQ queue until it fills up all the memory? How can I avoid that?
The JmsMessageDrivenEndpoint is based on a JMS MessageListenerContainer, its threading model, and the MessageListener callback for pulled messages. As long as your MessageListener blocks, the container does not pull the next message from the queue. When we build an integration flow starting with a JmsMessageDrivenEndpoint, the flow becomes that MessageListener callback. As long as we process the message downstream on the same thread (a DirectChannel by default between endpoints), we don't pull the next message from the JMS queue. If you place a QueueChannel or an ExecutorChannel in between, you shift processing to a different thread; the current one (the JMS listener) gets control back and is ready to pull the next message. In that case your concern about memory is valid. You can still use a QueueChannel with a limited size, or configure your ExecutorChannel with a limited thread pool.
Either way, my recommendation is not to do any thread shifting in the flow when you start from a JMS listener container. It is better to block for the next message and let the current transaction finish its job, so you won't lose a message when something crashes.
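If you do decide to shift processing to another thread despite that advice, a bounded executor gives the listener the back-pressure described above. This is only a sketch under that assumption; the bean name, pool sizes, and queue capacity are illustrative and not taken from the question:

import java.util.concurrent.ThreadPoolExecutor;

import org.springframework.context.annotation.Bean;
import org.springframework.integration.channel.ExecutorChannel;
import org.springframework.messaging.MessageChannel;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

public class BoundedHandoffConfig {

    // Hypothetical bounded hand-off channel to wire as the adapter's output channel.
    @Bean
    public MessageChannel boundedHandoffChannel() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(4);
        executor.setMaxPoolSize(4);
        executor.setQueueCapacity(100);
        // When the queue is full, run the task on the caller (the JMS listener thread),
        // which throttles how fast the container can pull the next message.
        executor.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());
        executor.initialize();
        return new ExecutorChannel(executor);
    }
}

A QueueChannel built with a capacity (new QueueChannel(100)) gives a similar bound, but remember that a pollable channel also needs a poller on the consuming side.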

Rabbitmq suggestion for implementing call back queue feature

I came across the callback queue feature in RabbitMQ, and it's pretty fancy too. The idea is that I have created one message queue (queue1), its callback queue (queue1_cb), and its DLQ (queue1_dlq). I am implementing the HA feature with 2 nodes.
The problem comes when I deploy 2 instances of my application (I have one sender and one receiver app in Spring Boot). Both listen to the same HA cluster. The scenario is as follows.
The sender publishes a message to RabbitMQ.
The receiver app consumes the message. The receiver app has to call a third-party API, which is socket based and asynchronous, so I do not get the response on the same connection. So I store the Channel and Message objects, which I need to ack the message. (Please note I am delaying the ack until I receive the response from the third-party API.)
When I deploy 2 instances of the receiver app, either instance may get the response from the third-party API, and that instance may not have the Channel and Message objects needed to ack the message and send a message to the callback queue.
Can anyone suggest a solution? This is a priority for me.
Below is my code.
At Receiver side :
@Override
public void onMessage(Message arg0, Channel arg1) throws Exception {
    String msg = new String(arg0.getBody());
    AppObject obj = mapper.readValue(msg, AppObject.class);
    Packet packet = new Packet();
    packet.setChannel(arg1);
    packet.setMessage(arg0);
    packet.setAppObject(obj);
    AppParam.objects.put(String.valueOf(key), packet);
    // Call the third-party API
}
At the time of acking and sending callback message:
public boolean pushMessageToCallBack(String key, AppObject packet, Channel channel, Message message) throws IOException {
    RabbitTemplate replyRabbitTemplate = // Get the RabbitTemplate object. It is handled properly.
    replyRabbitTemplate.convertAndSend(packet);
    channel.basicAck(message.getMessageProperties().getDeliveryTag(), false);
    return true;
}
You need a different callback queue for each instance or, more simply, just use Direct Reply-to where you don't need a queue at all.
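For the Direct Reply-to route, here is a minimal sketch with Spring AMQP's RabbitTemplate (an assumption, since the question doesn't show the sender's template setup; the exchange, routing key, and timeout are illustrative). When no reply queue is configured, the blocking sendAndReceive operations use the broker's amq.rabbitmq.reply-to pseudo-queue, so each sender instance receives replies on its own channel and no shared callback queue is needed:

import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;

public class DirectReplySender {

    private final RabbitTemplate rabbitTemplate;

    public DirectReplySender(ConnectionFactory connectionFactory) {
        this.rabbitTemplate = new RabbitTemplate(connectionFactory);
        this.rabbitTemplate.setReplyTimeout(30_000); // wait up to 30s for the reply (illustrative)
    }

    public Object call(Object request) {
        // Blocks until the receiver replies to the message's replyTo address,
        // or returns null when the timeout elapses.
        return rabbitTemplate.convertSendAndReceive("app.exchange", "queue1", request);
    }
}

On the receiver side, sending the reply back to the message's replyTo address with the original correlation ID is all that's required, so no queue1_cb has to be declared or shared between instances.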

RabbitMQ multiple acknowledges to same message closes the consumer

If I acknowledge the same message twice using the Delivery.Ack method, my consumer channel just closes by itself.
Is this expected behaviour? Has anyone experienced this?
The reason I am acknowledging the same message twice is a special case where I have to break the original message into copies and process them on the consumer. Once the consumer has processed everything, it loops and acks everything. Since there are copies of the entity, it acks the same message twice, and my consumer channel shuts down.
According to the AMQP reference, a channel exception is raised when a message gets acknowledged for the second time:
A message MUST not be acknowledged more than once. The receiving peer
MUST validate that a non-zero delivery-tag refers to a delivered
message, and raise a channel exception if this is not the case.
A second call to Ack(...) for the same message will not return an error, but the channel gets closed due to this exception received from the server:
Exception (406) Reason: "PRECONDITION_FAILED - unknown delivery tag ?"
It is possible to register a listener via Channel.NotifyClose to observe this exception.
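The question uses the Go client, but the behaviour is defined at the protocol level. As a cross-check, here is a small sketch with the RabbitMQ Java client (an assumption, not the client from the question; the host and queue name are illustrative) that reproduces the shutdown and observes it through a shutdown listener, the Java counterpart of NotifyClose:

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.GetResponse;

public class DoubleAckDemo {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumed local broker
        try (Connection connection = factory.newConnection()) {
            Channel channel = connection.createChannel();
            // Observe the channel-level exception the broker raises on a double ack.
            channel.addShutdownListener(cause ->
                    System.out.println("Channel closed: " + cause.getMessage()));

            GetResponse response = channel.basicGet("queue1", false); // manual ack mode
            if (response != null) {
                long tag = response.getEnvelope().getDeliveryTag();
                channel.basicAck(tag, false); // first ack: fine
                channel.basicAck(tag, false); // second ack: broker raises
                // 406 PRECONDITION_FAILED - unknown delivery tag, and closes the channel
            }
        }
    }
}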

Changing state of messages which are "in delivery"

In my application, I have a queue (HornetQ) set up on JBoss 7 AS.
I have used Spring Batch to do some work once a message is received (save values in the database, etc.), and then the consumer commits the JMS session.
Sometimes, when there is an exception while processing the message, the execution of the consumer is aborted abruptly and the message remains in the "in delivery" state. There are about 30 messages in this state on my production queue.
I have tried restarting the consumer, but the state of these messages does not change. The only way to remove these messages from the queue is to restart the queue. But before doing that, I want a way to read these messages so that they can be corrected and sent to the queue again to be processed.
I have tried using a QueueBrowser to read them, but it does not work. I have searched a lot on Google but could not find any way to read these messages.
I am using a transacted session where, once the message is processed, I call:
session.commit();
This sends the acknowledgement.
I am implementing Spring's org.springframework.jms.listener.SessionAwareMessageListener to receive messages and then process them. While processing the messages, I use Spring Batch to insert some data into the database. For a particular case, it tries to insert data too big for the column; this throws an exception and the transaction is aborted.
Now I have fixed my producer and consumer not to produce such data, so this case should not happen again. But my question is: what about the 30 "in delivery" messages that are in my production queue? I want to read them so that they can be corrected and sent to the queue again to be processed. Is there any way to read these messages? Once I know their content, I can restart the queue and submit them again (after correcting them).
Thanking you in anticipation,
Suvarna
It all depends on the transaction mode you are using.
For instance, if you use transactions:
// session here is a transacted session
MessageConsumer consumer = session.createConsumer(someQueue);
connection.start();
Message msg = consumer.receive...
session.rollback(); // this will make the messages be redelivered
If you are using non-transacted (auto-ack):
// session here is auto-ack
MessageConsumer consumer = session.createConsumer(someQueue);
connection.start();
// this means the message is ACKed as we receive it, via auto-ack
Message msg = consumer.receive...
// however, the consumer may still hold a buffer of messages from the server;
// if you are not using the consumer any longer, close it
consumer.close(); // this will release messages held in the client buffer
Alternatively, you could set consumerWindowSize=0 on the connection factory.
This is documented for 2.2.5, but it has not changed in later releases:
http://docs.jboss.org/hornetq/2.2.5.Final/user-manual/en/html/flow-control.html
I'm covering all the possibilities I can think of, since you are not being specific about how you are consuming. If you provide more detail, I will be able to tell you more.
You can indeed read the messages in the queue using JMX (with, for example, jconsole).
In JBoss AS7 you can do it the following way:
MBeans > jboss.as > messaging > default > myJmsQueue > Operations
listMessagesAsJson
[edit]
Since 2.3.0 there is a dedicated method for this specific case:
listDeliveringMessages
See https://issues.jboss.org/browse/HORNETQ-763
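For completeness, the same jconsole steps can be scripted over remote JMX. This is only a sketch: the service URL, the ObjectName (derived from the MBeans > jboss.as > messaging > default > myJmsQueue path above), and the assumption that listMessagesAsJson takes a single nullable String filter should all be verified against what jconsole actually shows for your server:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class DumpQueueMessages {
    public static void main(String[] args) throws Exception {
        // AS7 exposes JMX over the management port; requires jboss-client.jar on the classpath.
        JMXServiceURL url = new JMXServiceURL("service:jmx:remoting-jmx://localhost:9999");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            ObjectName queue = new ObjectName(
                    "jboss.as:subsystem=messaging,hornetq-server=default,jms-queue=myJmsQueue");
            // Dump the queue's messages as JSON so they can be corrected and resubmitted.
            // On HornetQ >= 2.3.0, the dedicated listDeliveringMessages operation mentioned
            // above targets the "in delivery" messages specifically.
            String json = (String) mbsc.invoke(queue, "listMessagesAsJson",
                    new Object[] { null }, new String[] { String.class.getName() });
            System.out.println(json);
        }
    }
}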
