I'm using the MassTransit request/response pattern, so I have a requester application and a consumer application, and it is working very well. I didn't configure any retry/redelivery because, if an error happens in the consumer, the requester will handle it or may simply send another request. So far so good.
But if the consumer application crashes and restarts in the middle of processing, the consumer takes the message from the queue and starts reprocessing it, which is not what I want: the requester will already have received an error response (or a timeout) when the consumer application crashed. I know that MessageRetry in MassTransit is entirely in-memory.
My question is: can we somehow stop the consumer from reprocessing the message on application restart, or do we need to remove the pending message from the Service Bus queues?
There is no connection between a message sent by the request client and the request client itself once the message has been sent. By default, MassTransit sets the TimeToLive of the request message to the request timeout, and the transport should remove the message once that TimeToLive has expired.
If the consumer application crashes while consuming a message, that message will remain on the queue. If the message repeatedly causes your application to crash, you could check the Redelivered property on the ReceiveContext (available as a property of ConsumeContext) and handle the message another way if you believe it is what is crashing the process.
Of course, the real solution is to fix the consumer so it doesn't crash the process...
UPDATE
You could configure the receive endpoint with a MaxDeliveryCount of 1 if you want Azure Service Bus to move the message to the dead-letter queue when the consuming process crashes.
Related
I have an application using JMS that sends data to an ActiveMQ Artemis queue. I got an exception with this message:
The transaction was rolled back on failover however commit may have been successful
This exception is basically telling me that the message may or may not have reached the queue, so I don't know whether I need to send it again. What's the best way to handle an exception like this when:
I cannot send duplicate messages to applications on the other end of the queue.
and
I cannot skip a message.
I can't state it better than the ActiveMQ Artemis documentation:
When sending messages from a client to a server, or indeed from a server to another server, if the target server or connection fails sometime after sending the message, but before the sender receives a response that the send (or commit) was processed successfully then the sender cannot know for sure if the message was sent successfully to the address.
If the target server or connection failed after the send was received and processed but before the response was sent back then the message will have been sent to the address successfully, but if the target server or connection failed before the send was received and finished processing then it will not have been sent to the address successfully. From the senders point of view it's not possible to distinguish these two cases.
When the server recovers this leaves the client in a difficult situation. It knows the target server failed, but it does not know if the last message reached its destination ok. If it decides to resend the last message, then that could result in a duplicate message being sent to the address. If each message was an order or a trade then this could result in the order being fulfilled twice or the trade being double booked. This is clearly not a desirable situation.
Sending the message(s) in a transaction does not help out either. If the server or connection fails while the transaction commit is being processed it is also indeterminate whether the transaction was successfully committed or not!
To solve these issues Apache ActiveMQ Artemis provides automatic duplicate messages detection for messages sent to addresses.
See more details about how to configure and use duplicate detection in the ActiveMQ Artemis documentation.
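In practice, for a JMS client this comes down to setting the _AMQ_DUPL_ID string property to a stable, application-chosen value and reusing the same value if you have to resend. A minimal sketch, assuming the Artemis JMS client, a broker at tcp://localhost:61616 and a queue named orders (all placeholders):

```java
import javax.jms.*;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class DuplicateSafeSender {
    public static void main(String[] args) throws Exception {
        // Broker URL and queue name are placeholders.
        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
        try (JMSContext ctx = cf.createContext(JMSContext.SESSION_TRANSACTED)) {
            Queue queue = ctx.createQueue("orders");
            TextMessage msg = ctx.createTextMessage("order-payload");

            // A stable, application-chosen duplicate ID (e.g. the order number). If the commit
            // fails with "commit may have been successful", resend with the SAME _AMQ_DUPL_ID
            // and the broker will silently discard the copy it already accepted.
            msg.setStringProperty("_AMQ_DUPL_ID", "order-12345");

            ctx.createProducer().send(queue, msg);
            ctx.commit();
        }
    }
}
```

The important design point is that the duplicate ID comes from the business data (an order or trade identifier), not a fresh UUID per attempt; otherwise the broker has no way to recognise the resend as a duplicate.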
Is there any way we can know when a consumer disconnects from a queue or when a queue is deleted?
The requirement is as follows:
I'm building a system in which multiple clients can subscribe to certain events from the system. Each client creates its own queue and registers itself with the system using some sort of authentication. As events are generated, the system filters them and forwards them to the clients that are eligible for them.
I have implemented a POC for most of it and it works well. An issue that I'm not able to fix is that, if a client simply disconnects from the queue (due to program termination or similar), the registration still exists and the system keeps trying to push messages to that client.
So we would like to be notified when a client disconnects or a queue gets deleted, so that we can remove that client's registration data and no longer push messages to it.
Let your publisher use Confirms (aka Publisher Acknowledgements) and make each client queue exclusive and transient, so that only one client at a time consumes from a given queue and the queue is deleted once that client disconnects.
If you publish a message that would be routed only to that queue and the queue is gone (assuming you use publisher confirms and publish with the mandatory flag set), the publisher is notified that the message cannot be routed and the message is returned to it, so you can stop publishing messages to that client.
For details, see the "How Confirms Work" section of the RabbitMQ blog post "Introducing Publisher Confirms" and the Confirms (aka Publisher Acknowledgements) official docs.
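To make that concrete, here is a minimal sketch with the RabbitMQ Java client; the events exchange, the routing key and the broker host are placeholders, and the exchange is assumed to already exist. The channel is put into confirm mode, the message is published with the mandatory flag, and a return listener fires if the message could not be routed because the client's exclusive queue has gone away.

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.nio.charset.StandardCharsets;

public class ConfirmedPublisher {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // placeholder broker host
        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {

            channel.confirmSelect(); // enable publisher confirms on this channel

            // Fired when a mandatory message cannot be routed to any queue, e.g. because
            // the client's exclusive queue was deleted when the client disconnected.
            channel.addReturnListener((replyCode, replyText, exchange, routingKey, props, body) ->
                    System.out.println("Unroutable, dropping registration for: " + routingKey));

            // The client side would have declared its queue as exclusive + auto-delete, e.g.:
            // channel.queueDeclare("client-42-inbox", false, true, true, null);

            channel.basicPublish("events", "client-42-events", true /* mandatory */, null,
                    "event-payload".getBytes(StandardCharsets.UTF_8));

            channel.waitForConfirmsOrDie(5_000); // block until the broker confirms (or nacks)
        }
    }
}
```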
I have a question about the DEADQ (dead-letter queue) in MQ. I know the DEADQ is used when a message cannot be delivered to the target queue, for example when the queue is full or put-inhibited. But if a client application connects to the QMGR through a SVRCONN channel and the target queue is currently full, will the message sent by the client application go to the DEADQ, or will the put operation just return a failure saying the queue is full?
If it works as the latter, does that mean the DEADQ is not used in client-server mode, i.e. when connecting through a SVRCONN channel?
Thanks
The DLQ is used by QMgr-to-QMgr channels because at that point the message has been entrusted to WMQ to deliver and/or persist as necessary. There is no option to directly notify the sending application in the API call that something went wrong. The best the QMgr can do is to send back a report message, if the app has requested it and has specified a reply-to queue.
When an application is connected over a client channel the QMgr can tell the application during the PUT API call "hey, the destination queue is full," and let the application decide the appropriate course of action. Usually that "correct action" does NOT include putting the message onto the DLQ. In fact, routing this type of message to a secondary queue is VERY unusual. Conventional wisdom says it's better for the app to raise an alert than to provision overflow queues.
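In other words, the client application sees the failure synchronously on the put and can decide for itself. A rough sketch using the IBM MQ classes for Java; the queue manager and queue names are placeholders, and the client-channel connection settings (host, port, SVRCONN channel) are omitted for brevity:

```java
import com.ibm.mq.MQException;
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQPutMessageOptions;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

public class ClientPut {
    public static void main(String[] args) throws Exception {
        // Placeholder names; client-mode connection details (host, port, channel) omitted.
        MQQueueManager qmgr = new MQQueueManager("QM1");
        MQQueue queue = qmgr.accessQueue("APP.TARGET.QUEUE", CMQC.MQOO_OUTPUT);

        MQMessage msg = new MQMessage();
        msg.writeString("payload");

        try {
            queue.put(msg, new MQPutMessageOptions());
        } catch (MQException e) {
            if (e.reasonCode == CMQC.MQRC_Q_FULL) {
                // The put fails synchronously; nothing is routed to the DLQ.
                // Raise an alert (or retry later) rather than provisioning overflow queues.
                System.err.println("Target queue is full; alerting operations.");
            } else {
                throw e;
            }
        } finally {
            queue.close();
            qmgr.disconnect();
        }
    }
}
```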
The one exception to this is when using JMS classes. These will move a poison message (one where the backout count exceeds the BOTHRESH value on the queue) to a backout queue. If the backout queue is not specified or the app is not authorized to it or if it is full, the app will then try to use the DLQ. Again, allowing an app to directly put messages to the DLQ is very unusual and not considered good practice, but it can be done.
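For reference, the backout handling that the JMS classes perform automatically looks roughly like the hand-rolled sketch below: in a transacted session a rollback puts the message back and bumps its delivery count, and once the count passes the threshold the message is diverted instead of being reprocessed. The queue names and the threshold constant are placeholders standing in for the queue's BOQNAME and BOTHRESH attributes.

```java
import javax.jms.*;

public class BackoutAwareConsumer {
    public static void consume(ConnectionFactory cf) throws JMSException {
        int backoutThreshold = 3; // stands in for the queue's BOTHRESH attribute
        try (JMSContext ctx = cf.createContext(JMSContext.SESSION_TRANSACTED)) {
            JMSConsumer consumer = ctx.createConsumer(ctx.createQueue("APP.TARGET.QUEUE"));
            Queue backout = ctx.createQueue("APP.TARGET.BACKOUT"); // stands in for BOQNAME

            Message msg = consumer.receive(5_000);
            if (msg == null) return;
            try {
                // JMSXDeliveryCount is 1 on first delivery and grows with each rollback.
                if (msg.getIntProperty("JMSXDeliveryCount") > backoutThreshold) {
                    ctx.createProducer().send(backout, msg); // sidestep the poison message
                } else {
                    process(msg); // normal processing (placeholder)
                }
                ctx.commit();
            } catch (Exception e) {
                ctx.rollback(); // message goes back on the queue, delivery count increments
            }
        }
    }

    private static void process(Message msg) { /* application logic */ }
}
```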
I am using ActiveMQ as the JMS server in my application. The scenario is: there is a topic with many durable subscribers that consume the published messages, and a message listener that saves the data from the message object to a central DB server. A producer thread keeps publishing persistent messages to the same topic. I am using KahaDB as the persistent message store. As soon as a message is published, KahaDB creates a data log file in the message store to persist the message until all durable subscribers have consumed it. I want to know what the impact would be if, at some point, I shut down the JMS server and delete all the data log files. Would it just be that a few durable subscribers do not receive messages that were still sitting in the data log files waiting to be consumed, or is it also possible that a few messages never got saved to the central database by the message listener on this topic?
Any hint or help is greatly appreciated.
Thanks in advance.
If you stop and start your broker, regardless of whether you delete your data files or not, topic consumers that have not already received a published message will no longer receive it. The reason is that messages sent to a topic are not written out to the persistent message store.
Durability and persistence are not the same thing. A durable subscription tells the broker to preserve the subscription state in case the subscriber disconnects; any messages sent while the consumer is disconnected will be kept around. A non-durable subscription, on the other hand, is finite: if a subscriber disconnects, it misses any messages sent in the interim. All of these messages are stored in memory and will not survive a broker restart.
Message persistence, on the other hand, stores messages for eventual delivery. It guards against catastrophic failure and allows for later delivery to consumers that might not yet be active.
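To illustrate the distinction in code: durability is a property of the subscription, declared by the consumer (a named subscription tied to a client ID), while persistence is a property of each message, declared by the producer. A minimal sketch against ActiveMQ 5.x; the broker URL, client ID, topic and subscription names are all placeholders.

```java
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class DurableVsPersistent {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616"); // placeholder
        Connection conn = cf.createConnection();
        conn.setClientID("reporting-app"); // a durable subscription requires a stable client ID
        conn.start();

        Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = session.createTopic("prices"); // placeholder topic name

        // Durable subscription: the broker remembers "reporting-app"/"prices-sub" and holds
        // on to messages published while this subscriber is offline.
        MessageConsumer subscriber = session.createDurableSubscriber(topic, "prices-sub");

        // Persistence is set per message by the producer.
        MessageProducer producer = session.createProducer(topic);
        producer.setDeliveryMode(DeliveryMode.PERSISTENT);
        producer.send(session.createTextMessage("tick"));

        System.out.println("Got: " + subscriber.receive(5_000));
        conn.close();
    }
}
```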
If you want to broadcast messages using pub-sub, and have the subscriptions appear durable and survive broker restarts, you should use virtual destinations instead of durable subscriptions.
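With virtual destinations, the producer keeps publishing to a topic, but each subscribing application drains its own queue, which the broker persists like any other queue. A sketch using ActiveMQ's default naming convention (VirtualTopic.* for the topic, Consumer.<app>.VirtualTopic.* for the per-application queue); the concrete names are placeholders.

```java
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class VirtualTopicExample {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616"); // placeholder
        Connection conn = cf.createConnection();
        conn.start();
        Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Each subscribing application consumes from its own queue: Consumer.<app>.<topicName>.
        // Create it (by consuming from it) before publishing so messages get forwarded to it.
        Queue myCopy = session.createQueue("Consumer.ReportingApp.VirtualTopic.Prices");
        MessageConsumer consumer = session.createConsumer(myCopy);

        // The producer publishes to a topic under the default "VirtualTopic." prefix; the broker
        // fans each message out into every matching Consumer.* queue.
        Topic virtualTopic = session.createTopic("VirtualTopic.Prices");
        MessageProducer producer = session.createProducer(virtualTopic);
        producer.setDeliveryMode(DeliveryMode.PERSISTENT);
        producer.send(session.createTextMessage("tick"));

        System.out.println("Got: " + consumer.receive(5_000));
        conn.close();
    }
}
```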
No messages, persistent or non-persistent, will survive switching the broker off and deleting the data directory.
I'm dealing with a standalone MQ JMS application. Our app needs to be aware that the client has already consumed the message the producer put on the queue. The client app is not under our control, so we cannot ask them to write something like "msg.acknowledge();" on their side (msg.acknowledge() is not the right approach for my situation). I searched earlier answers on Stack Overflow and found the following, which is quite close to what I want:
https://stackoverflow.com/questions/6521117/how-to-guarantee-delivery-of-the-message-in-jms
Do the JMS spec or the various implementations support delivery confirmation of messages?
My question is: is there any other way to achieve this in the MQ API or the JMS API? I need to do the coding only on the message producer side; it can be a queue or a topic.
Another question: is the JMS acknowledge mode CLIENT_ACKNOWLEDGE irrelevant to the producer? I always believed that this mode would block the application in the send() call until the client consumed the message and called msg.acknowledge(), but it seems that is not the case. The producer just exits after the message is delivered, and the message is simply stored in the queue until the client calls acknowledge(). Is it possible to make the producer app hang there and wait until the message has been acknowledged by the client?
If my understanding is not right, please correct me. Thanks.
The main intention of message queuing is to decouple the producer and the consumer. The producer does not need to wait for the message to be consumed by the consumer; it can continue with its job. Ideally, if the producer needs to know whether the message has been processed by the consumer, it should wait for the consumer to send a response message on another queue, as sketched below.
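A sketch of that request/response idea in plain JMS 2.0; the queue names are placeholders, and the consumer is expected to copy the request's message ID into the reply's JMSCorrelationID.

```java
import javax.jms.*;

public class RequestWithReply {
    // Sends a request and blocks until the consumer posts a correlated reply on a reply queue.
    public static String sendAndWait(ConnectionFactory cf, String payload) throws JMSException {
        try (JMSContext ctx = cf.createContext()) {
            Queue requests = ctx.createQueue("APP.REQUESTS"); // placeholder
            Queue replies = ctx.createQueue("APP.REPLIES");   // placeholder

            TextMessage request = ctx.createTextMessage(payload);
            request.setJMSReplyTo(replies);
            ctx.createProducer().send(requests, request);

            // Correlate on the request's message ID so we only pick up our own reply.
            String selector = "JMSCorrelationID = '" + request.getJMSMessageID() + "'";
            try (JMSConsumer consumer = ctx.createConsumer(replies, selector)) {
                Message reply = consumer.receive(30_000); // give up after 30 seconds
                return (reply instanceof TextMessage) ? ((TextMessage) reply).getText() : null;
            }
        }
    }
}
```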
Message acknowledgement has nothing to do with the producer. It is the way a consumer tells the messaging provider to remove a message from the queue after the message has been delivered to the application.
There is auto acknowledge, where the JMS client (like the MQ JMS classes), after delivering a message to the application, tells the messaging provider to remove the message from the queue. Then there is client acknowledge, where, after receiving a message, the application explicitly tells the messaging provider to remove the message from the queue.
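For completeness, this is roughly what client acknowledge looks like on the consumer side; it has no effect on the producer, which returned from send() long before. The queue name is a placeholder.

```java
import javax.jms.*;

public class ClientAckConsumer {
    public static void consumeOne(ConnectionFactory cf) throws JMSException {
        try (JMSContext ctx = cf.createContext(JMSContext.CLIENT_ACKNOWLEDGE)) {
            JMSConsumer consumer = ctx.createConsumer(ctx.createQueue("APP.REQUESTS"));
            Message msg = consumer.receive(5_000);
            if (msg != null) {
                // ... process the message ...
                msg.acknowledge(); // only now does the provider remove it from the queue
            }
        }
    }
}
```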
Is there a reason why the producer has to wait for the consumer to receive the message? One way, though not elegant, could be: once the message is sent, use the message ID of the sent message and try to browse for that message. If the message is not found, you can assume it has been consumed.
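A sketch of that browse-based check on the producer side, using a QueueBrowser with a selector on the provider-assigned JMSMessageID. Bear in mind a browse only shows that the message is still visible on the queue, not that it was successfully processed.

```java
import javax.jms.*;
import java.util.Enumeration;

public class SentMessageChecker {
    // Returns true while the sent message is still sitting on the queue.
    public static boolean stillOnQueue(JMSContext ctx, Queue queue, String messageId) throws JMSException {
        String selector = "JMSMessageID = '" + messageId + "'";
        try (QueueBrowser browser = ctx.createBrowser(queue, selector)) {
            Enumeration<?> pending = browser.getEnumeration();
            return pending.hasMoreElements(); // false => presumably consumed
        }
    }
}
```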