I am integrating several .NET modules using pub/sub messaging with RabbitMQ and MassTransit. It's important not to lose any message. When a module publishes a message, it is supposed to be a fire-and-forget action: the moment the broker takes responsibility, the publisher can fail and the message will still be delivered, even much later.
I am trying to achieve the same behavior even when the connection to the broker is lost. I imagine a local persistent queue that stores the message(s) until they can be successfully delivered to the broker. The module can continue its operation meanwhile. If the module fails before it is able to deliver the messages to the broker, they stay stored locally. Once the module starts again and the connection to the broker is available, the stored messages are published to the broker. I am looking for an in-process solution.
Does MassTransit support such a scenario out of the box? Is there some third-party MassTransit plugin that can do this?
I'm using the WebSocket protocol in my Spring Boot application. There are multiple pods used to handle heavy traffic. Having multiple pods is causing an issue. Let me explain it briefly.
Let's assume there are 2 pods (Pod 1, Pod 2). The Angular UI subscribes to the Spring Boot application over the WebSocket protocol, let's say via Pod 1. Now the Spring Boot application sends a message to the UI, let's say via Pod 2, and this message gets dropped (it never reaches the UI) since the WebSocket connection was established via Pod 1.
Because of this, messages sent to the UI by pods other than the one used for the initial subscription are dropped; only messages sent via the pod used for the subscription reach the UI.
How do I tackle this scenario, so that every message is delivered to the UI in this multi-pod environment?
The solution to the multiple-pod issue is to use an external message broker (like RabbitMQ or ActiveMQ) instead of the in-memory message broker (the default behavior).
You may face the following issues while implementing this (writing them down in one place so that you don't have to struggle as much as I did 🙂):
Creating Auto-Delete Queues
When using an external message broker, you might observe that a queue is created for every WebSocket connection, but the queues are not deleted when the WebSocket connection is closed. We don't even need these queues afterwards; hence the need for auto-delete queues, which are removed automatically when the WebSocket connection is closed. Declaring auto-delete queues is easy:
When using user destinations with an external message broker, check the broker documentation on how to manage inactive queues, so that when the user session is over, all unique user queues are removed. For example, RabbitMQ creates auto-delete queues when destinations like /exchange/amq.direct/position-updates are used. So in that case the client could subscribe to /user/exchange/amq.direct/position-updates. Similarly, ActiveMQ has configuration options for purging inactive destinations.
In simple terms, the WebSocket client and the WebSocket server should use an exchange destination of the form /exchange/amq.direct/<anything>.
For more info, read the official docs.
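For instance, the server side can push per-user updates through such a destination with SimpMessagingTemplate. A minimal sketch (the position-updates suffix and the method name here are illustrative assumptions, not from the original post):
@Autowired
private SimpMessagingTemplate messagingTemplate;

public void pushPositionUpdate(String username, Object payload) {
    // The client subscribes to "/user/exchange/amq.direct/position-updates";
    // RabbitMQ backs this with an auto-delete queue bound to amq.direct, so
    // the queue disappears when the WebSocket session closes.
    messagingTemplate.convertAndSendToUser(
            username, "/exchange/amq.direct/position-updates", payload);
}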
SSL/STOMP protocol on a cloud instance
Another issue you might face when hosting your application on AWS, Azure, or Google Cloud is that they use the SSL/STOMP protocol, so code which works fine on your local machine (which uses plain STOMP) doesn't work in the cloud.
Broadcasting message from one pod to other pods
This issue is the same as the one described in this Stack Overflow question (refer to that question for details).
Now let me put up the code snippet, with comments indicating which part of the snippet fixes which issue. Add it inside your configureMessageBroker method:
ReactorNettyTcpClient<byte[]> tcpClient = new ReactorNettyTcpClient<>(
        client -> client
                .port(yourRabbitmqCloudStompPort)
                .host(yourRabbitmqCloudHost)
                .secure(SslProvider.defaultClientProvider()),
        new StompReactorNettyCodec());

messageBrokerRegistry
        // enables the STOMP broker relay, instead of the in-memory broker
        .enableStompBrokerRelay("/queue", "/topic", "/exchange")
        .setClientLogin(yourRabbitmqCloudClientLogin)
        .setClientPasscode(yourRabbitmqCloudClientPasscode)
        .setSystemLogin(yourRabbitmqCloudSystemLogin)
        .setSystemPasscode(yourRabbitmqCloudSystemPasscode)
        // broadcast user-destination messages to every pod
        .setUserDestinationBroadcast("/topic/unresolved-user-destination")
        .setUserRegistryBroadcast("/topic/user-registry")
        // enables the SSL/STOMP protocol
        .setTcpClient(tcpClient);
Is there any way we can know when a consumer disconnects from a queue or when a queue is deleted?
The requirement is as follows:
I'm building a system in which multiple clients can subscribe to certain events from the system. All clients create their own queue and register themselves with the system using some sort of authentication. The system, as the events are generated, filters the events and forwards them to the clients that are eligible for them.
I have implemented a POC for most of it and it works well. One issue that I'm not able to fix is that if a client just disconnects from the queue (due to program termination or the like), the registration still exists and the system keeps trying to push messages to that client.
So we would like to be notified when a client disconnects or a queue gets deleted, so that we can remove that client's registration data and no longer push messages to it.
Let your publisher use Confirms (aka Publisher Acknowledgements), and make each client queue exclusive and auto-delete (transient), so that only one client at a time consumes from a given queue and the queue is deleted when that client disconnects.
If you publish a message that is routed to only one queue and that queue is gone (assuming you use publisher confirms and publish the message with the mandatory flag set), the publisher is notified that the message cannot be routed and gets the message returned to it, so you can stop publishing.
For details, see the "How Confirms Work" section in the RabbitMQ blog post "Introducing Publisher Confirms" and the Confirms (aka Publisher Acknowledgements) official docs.
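A minimal sketch of that approach with the RabbitMQ Java client (the exchange and routing-key names are illustrative assumptions):
import com.rabbitmq.client.*;

public class MandatoryPublisher {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {
            channel.confirmSelect(); // enable publisher confirms on this channel
            // Invoked when a mandatory message cannot be routed to any queue,
            // i.e. the client's exclusive queue is gone
            channel.addReturnListener((replyCode, replyText, exchange, routingKey,
                                       props, body) ->
                    System.out.println("Client queue gone, stop publishing to " + routingKey));
            // mandatory = true: return the message if no queue is bound
            channel.basicPublish("events", "client-42", true, null,
                    "event payload".getBytes());
            channel.waitForConfirmsOrDie(5_000); // block until the broker confirms
        }
    }
}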
Is it possible to configure a topic to store a copy of just the last message and send it to new connections, without knowing client identifiers or other info?
Update:
From the info provided by Shashi I found these two pages where a use case similar to mine (applied to stock prices) is described, using a retroactive consumer and a subscription recovery policy. However, I'm not getting the desired behaviour. What I currently do is:
Include the following lines in the ActiveMQ policyEntry for topic=">":
<subscriptionRecoveryPolicy>
<fixedCountSubscriptionRecoveryPolicy maximumSize="1"/>
</subscriptionRecoveryPolicy>
Add consumer.retroactive=true to the URL used to connect to the broker (using activemq-cpp).
Set the consumer as durable. (I strongly suspect this is not what I want, since I only need the last message; but without it I didn't get any message when starting the consumer for the second time.)
Start up the broker.
Start the consumer.
Send a message to the topic using the ActiveMQ web admin console. (I receive it in the consumer, as expected.)
Stop the consumer.
Send another message to the topic.
Start the consumer. I receive the message, also as expected.
However, if the consumer receives a message, then goes offline (process stopped), and then I restart it, it doesn't get the last message back.
The goal is that whenever the consumer starts, it gets the last message, no matter what (except, obviously, when no messages have been sent to the topic yet).
Any ideas on what I'm missing?
Background:
I have a device which publishes its data to a topic whenever its data changes. A variable number of consumers may be connected to this topic, from 0 to fewer than 10. There is only one publisher on the topic, and it always publishes all of its data as a single message (little data, just a couple of fields of a sensor reading). The publication rate is variable and not necessarily time-based; when something changes, a new updated message is sent to the broker.
The problem is that when a new consumer connects to the topic, it has no data about the device readings until a new message is sent to the topic by the device. This could be solved by creating an additional queue so new connections can subscribe to the topic and then request the current reading from the device through the queue (the device would consume the queue message, which would be a request for data, and then respond on the same queue).
But since the messages sent to the topic are always complete, I was wondering whether it is possible to configure the topic to store a copy of just the last message and send it to new connections, without knowing client identifiers or other info.
Current broker in use is ActiveMQ.
What you want is to have retroactive consumers and to set the lastImageSubscriptionRecoveryPolicy subscription recovery policy on the topic. Shashi is correct in saying that the following syntax for setting a consumer to be retroactive works only with OpenWire:
topic = new ActiveMQTopic("TEST.Topic?consumer.retroactive=true");
In your case, what you can do is configure all consumers to be retroactive in the broker config with alwaysRetroactive="true". I tested that this works even for the AMQP protocol (with the qpid-jms-client library), and I suspect it will work for all protocols.
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <policyEntry topic="FOO.>" alwaysRetroactive="true">
        <subscriptionRecoveryPolicy>
          <lastImageSubscriptionRecoveryPolicy />
        </subscriptionRecoveryPolicy>
      </policyEntry>
    </policyEntries>
  </policyMap>
</destinationPolicy>
The configuration example is taken from https://github.com/apache/activemq/blob/master/activemq-unit-tests/src/test/resources/org/apache/activemq/test/retroactive/activemq-message-query.xml
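For completeness, here is a minimal hedged sketch of a consumer opting in per destination via the OpenWire option shown above (the broker URL and topic name are illustrative assumptions):
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class LastImageConsumer {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        // OpenWire-only destination option marking this consumer retroactive;
        // with lastImageSubscriptionRecoveryPolicy the broker replays the latest message
        Topic topic = session.createTopic("FOO.TEST?consumer.retroactive=true");
        MessageConsumer consumer = session.createConsumer(topic);
        Message last = consumer.receive(5000); // the retained last image, if any
        System.out.println("Got: " + last);
        connection.close();
    }
}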
Messaging providers (WebSphere MQ for example) have a feature called Retained Publication. With this feature the last published message on a topic is retained by the messaging provider and delivered to a new consumer who comes in after a message has been published on a given topic.
Retained Publication may be supported by ActiveMQ in its native interface. This link talks about consumer.retroactive, which is available for OpenWire only.
A publisher will tell the messaging provider to retain a publication by setting a property on the message before publishing. Below is how it is done using WebSphere MQ.
// set as a retained publication
msg.setIntProperty(JmsConstants.JMS_IBM_RETAIN, JmsConstants.RETAIN_PUBLICATION);
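Put in context, a minimal sketch of the surrounding publish flow, assuming the IBM MQ JMS classes and an already-created session and producer (the message text is illustrative):
// import com.ibm.msg.client.jms.JmsConstants;
// Assumes "session" and "producer" were created against an IBM MQ topic.
TextMessage msg = session.createTextMessage("sensor reading: 42.1");
// Ask the queue manager to retain this publication for future subscribers
msg.setIntProperty(JmsConstants.JMS_IBM_RETAIN, JmsConstants.RETAIN_PUBLICATION);
producer.send(msg);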
I have set up a uniform distributed queue with WebLogic Server 12c. I am trying to achieve ordered delivery and high availability with a JMS distributed queue. In my prototype testing deployment I have two managed servers in the cluster, say managed_server1 and managed_server2. Each of these managed servers hosts a JMS server (jms server1 and jms server2, respectively). I have configured the JMS servers with a JDBC persistent store. I have enabled server affinity.
I have a producer running, such as java queuproducer t3://managed_server1. I send out 4 messages. From the WebLogic monitoring console I see there are 4 messages in the queue, since there are no consumers on the queue yet.
Now I shut down managed_server1.
Bring up a consumer to listen via java queuconsumer t3://managed_server2. This consumer cannot consume the messages, since the producer sent all the messages to jms server1, and it is down.
Bring up managed_server1 and start a consumer to listen on t3://managed_server1; I get all the messages.
Here is my problem: say managed_server1 went down and never came back up, do I lose all my messages? Also, if there is another producer sending messages via java queuproducer t3://managed_server2, then the ordering of messages in time between these producers is not guaranteed.
I am a little lost; am I missing something? Can Unit-of-Order help me overcome this? Or should I use a distributed topic instead of a distributed queue, where all the JMS servers receive all the messages from producers? There is only one consumer in my application, though, and if the JMS server my consumer is listening on fails and I switch over to another JMS server, I might start getting messages from the beginning, not from where I left off.
Any suggestions regarding the same will be helpful.
Good question!
"Here is my problem: say managed_server1 went down and never came back up, do I lose all my messages?"
Ans - No, you do not lose your messages; they are stored in the JDBC store configured for the JMS server deployed on managed_server1. If you want the messages sent to managed_server1 to be consumed from managed_server2, you need to configure JMS migration.
"Also, if there is another producer sending messages via java queuproducer t3://managed_server2, then the ordering of messages in time between these producers is not guaranteed. Can Unit-of-Order help me overcome this?"
Ans - If you want the messages to be consumed strictly in a certain order, you will have to make use of Unit-of-Order (UOO). When messages are sent using UOO, they are sent to one of the several UDQ destinations; if that destination fails midway and migration is enabled, the pending messages are migrated to the next UDQ destination, and new UOO messages are also delivered to the new destination.
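As a minimal sketch of stamping messages with a Unit-of-Order via the WebLogic JMS extensions (the JNDI names and the unit name are illustrative assumptions):
import javax.jms.*;
import javax.naming.InitialContext;
import weblogic.jms.extensions.WLMessageProducer;

public class UooSender {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();
        QueueConnectionFactory cf = (QueueConnectionFactory) ctx.lookup("jms/MyCF");
        QueueConnection conn = cf.createQueueConnection();
        QueueSession session = conn.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue udq = (Queue) ctx.lookup("jms/MyUDQ");
        QueueSender sender = session.createSender(udq);
        // All messages sharing this unit name are pinned to one UDQ member
        // and consumed strictly in send order
        ((WLMessageProducer) sender).setUnitOfOrder("order-12345");
        sender.send(session.createTextMessage("first"));
        sender.send(session.createTextMessage("second")); // arrives after "first"
        conn.close();
    }
}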
Useful links -
http://www.youtube.com/watch?v=B9J7q5NbXag
http://www.youtube.com/watch?v=_W3EJ8p35lI
Hope this helps.
How can you create a durable architecture environment using MQ client and server if the clients don't allow you to persist messages, nor do they allow for assured delivery?
Just trying to figure out how you can build a scalable/durable architecture if the clients don't appear to contain any of the necessary components required to persist data.
Thanks,
S
Middleware messaging was born of the need to persist data locally to mitigate the effects of failures of the remote node or of the network. The idea at the time was that the queue manager was installed locally on the box where the application lives and was treated as part of the transport stack. For instance, you might install TCP and WMQ as transports, and some apps would use TCP while others used WMQ.
In the intervening 20 years, the original problems that led to the creation of MQSeries (Now WebSphere MQ) have largely been solved. The networks have improved by several nines of availability and high availability hardware and software clustering have provided options to keep the different components available 24x7.
So the practices in widespread use today to address your question follow two basic approaches. Either make the components highly available so that the client can always find a messaging server, or put a QMgr where the application lives in order to provide local queueing.
The default behavior of MQ is that when a message is sent (MQPUT, or in JMS terms producer.send), the application does not get a response on the MQPUT call until the message has reached a queue on a queue manager. That is, MQPUT is a synchronous call, and a completion code of OK means that the queue manager to which the client application is connected has received the message successfully. It may not yet have reached its ultimate destination, but it has reached the protection of an MQ server, so you can rely on MQ to look after the message and forward it on to where it needs to go.
Whether client connected, or locally bound to the queue manager, applications sending messages are responsible for their data until an MQPUT call returns successfully. Similarly, receiving applications are responsible for their data once they get it from a successful MQGET (or JMS consumer.receive) call.
There are multiple levels of message protection available.
If you are using non-persistent messages and asynchronous PUTs, then you are effectively saying it doesn't matter too much whether the messages reach their destination (although they generally will).
If you want MQ to really look after your messages, use synchronous PUTs as described above, persistent messages, and perform your PUTs and GETs within transactions (aka syncpoint) so you have full application control over the commit points.
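In JMS terms, that highest level of protection looks roughly like the following sketch (assuming an MQ JMS ConnectionFactory cf obtained elsewhere, e.g. via JNDI; the queue name is illustrative):
// Transacted session: the PUT only takes effect at commit()
Connection connection = cf.createConnection();
Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
Queue queue = session.createQueue("DEV.QUEUE.1");
MessageProducer producer = session.createProducer(queue);
// Persistent delivery: the queue manager hardens the message to its log
producer.setDeliveryMode(DeliveryMode.PERSISTENT);
producer.send(session.createTextMessage("payload"));
session.commit(); // the message is now safely the queue manager's responsibility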
If you have networks so unreliable that you expect to regularly fail to get messages to a server, and expect to need regular retries such that you need client-side message protection, one option you could investigate is MQ Telemetry (e.g. in WebSphere MQ V7.1), which is designed for low-bandwidth and/or unreliable network communications as a route into the wider MQ network.