Is it possible to configure the topic to store a copy of just the last message and send this to new connections without knowing client identifiers or other info?
Update:
From the info provided by Shashi I found these two pages where a use case similar to mine (applied to stock prices) is described, using a retroactive consumer and a subscription recovery policy. However, I'm not getting the desired behaviour. What I currently do is:
Include the following lines in the ActiveMQ configuration, in the policyEntry for topic=">":
<subscriptionRecoveryPolicy>
<fixedCountSubscriptionRecoveryPolicy maximumSize="1"/>
</subscriptionRecoveryPolicy>
Add consumer.retroactive=true to the URL used to connect to the broker (using activemq-cpp).
Set the consumer as durable. (I strongly suspect this is not what I want, since I only need the last message, but without it I didn't get any message when starting the consumer for the second time. A rough sketch of this setup appears after this update.)
Start up the broker.
Start the consumer.
Send a message to the topic using the activemq web admin console. (I receive it in the consumer, as expected)
Stop consumer.
Send another message to the topic.
Start consumer. I receive the message, also as expected.
However, if the consumer receives a message, then goes offline (the process is stopped) and is then restarted, it doesn't get that last message back.
The goal is that whenever the consumer starts it gets the last message, no matter what (except, obviously, when no messages have been sent to the topic yet).
Any ideas on what I'm missing?
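For reference, this is roughly the Java equivalent of the consumer I described above (I actually use activemq-cpp, so treat it as an approximation; the broker URL, topic name, client ID and subscription name are placeholders):

import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.Session;
import javax.jms.Topic;
import javax.jms.TopicSubscriber;
import org.apache.activemq.ActiveMQConnectionFactory;

public class CurrentConsumerSketch {
    public static void main(String[] args) throws Exception {
        Connection connection =
                new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection();
        connection.setClientID("sensor-display-1"); // needed for the durable subscription
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        // consumer.retroactive=true as a destination option (with activemq-cpp I set it
        // on the broker connection URL instead, as described above)
        Topic topic = session.createTopic("SENSOR.READINGS?consumer.retroactive=true");
        TopicSubscriber subscriber = session.createDurableSubscriber(topic, "sensor-sub");

        Message last = subscriber.receive(5000); // expect the last reading here on startup
        System.out.println("Received: " + last);
        connection.close();
    }
}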
Background:
I have a device which publishes its data to a topic whenever its data changes. A variable number of consumers may be connected to this topic, from 0 to fewer than 10. There is only one publisher on the topic, and it always publishes all of its data as a single message (little data, just a couple of fields of a sensor reading). The publication rate is variable and not necessarily time based: when something changes, a new updated message is sent to the broker.
The problem is that when a new consumer connects to the topic it has no data about the device readings until a new message is sent to the topic by the device. This could be solved by creating an additional queue, so new connections can subscribe to the topic and then request the current reading from the device through the queue (the device would consume the queue message, which would be a request for data, and then respond on the same queue).
But since the messages sent to the topic are always information-complete, I was wondering whether it is possible to configure the topic to store a copy of just the last message and deliver it to new connections without knowing client identifiers or other info.
Current broker in use is ActiveMQ.
What you want is to have retroactive consumers and to set the lastImageSubscriptionRecoveryPolicy subscription recovery policy on the topic. Shashi is correct in saying that the following syntax for setting a consumer to be retroactive works only with OpenWire:
topic = new ActiveMQTopic("TEST.Topic?consumer.retroactive=true");
In your case, what you can do is configure all consumers to be retroactive in the broker config with alwaysRetroactive="true". I tested that this works even for the AMQP protocol (qpid-jms-client library) and I suspect it will work for all protocols:
<destinationPolicy>
<policyMap>
<policyEntries>
<policyEntry topic="FOO.>" alwaysRetroactive="true">
<subscriptionRecoveryPolicy>
<lastImageSubscriptionRecoveryPolicy />
</subscriptionRecoveryPolicy>
</policyEntry>
</policyEntries>
</policyMap>
</destinationPolicy>
The configuration example is taken from https://github.com/apache/activemq/blob/master/activemq-unit-tests/src/test/resources/org/apache/activemq/test/retroactive/activemq-message-query.xml
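With that policy entry in place, a plain non-durable consumer should get the retained last message as soon as it connects. A minimal Java sketch, assuming a topic that matches the FOO.> entry above (broker URL and topic name are placeholders):

import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class LastImageConsumerSketch {
    public static void main(String[] args) throws Exception {
        Connection connection =
                new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection();
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        // No durable subscription and no per-destination option needed: the broker's
        // alwaysRetroactive="true" makes every consumer on FOO.> retroactive.
        MessageConsumer consumer = session.createConsumer(session.createTopic("FOO.BAR"));

        // On each (re)connect this should deliver the last message published to FOO.BAR, if any.
        Message lastImage = consumer.receive(5000);
        System.out.println("Last image: " + lastImage);
        connection.close();
    }
}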
Messaging providers (WebSphere MQ for example) have a feature called Retained Publication. With this feature the last published message on a topic is retained by the messaging provider and delivered to a new consumer who comes in after a message has been published on a given topic.
Retained Publication may be supported by ActiveMQ in its native interface. This link talks about consumer.retroactive, which is available for OpenWire only.
A publisher will tell the messaging provider to retain a publication by setting a property on the message before publishing. Below is how it is done using WebSphere MQ.
// set as a retained publication
msg.setIntProperty(JmsConstants.JMS_IBM_RETAIN, JmsConstants.RETAIN_PUBLICATION);
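For context, a slightly fuller sketch of the publishing side (standard JMS plus the IBM JmsConstants shown above); the connection factory is assumed to be already configured for the queue manager, and the topic name is a placeholder:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;
import com.ibm.msg.client.jms.JmsConstants;

public class RetainedPublisherSketch {
    public static void publishRetained(ConnectionFactory factory, String payload) throws Exception {
        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Topic topic = session.createTopic("SENSOR/READINGS");
            MessageProducer producer = session.createProducer(topic);

            TextMessage msg = session.createTextMessage(payload);
            // Mark the message as a retained publication: the provider keeps the last
            // one and delivers it to subscribers that join later.
            msg.setIntProperty(JmsConstants.JMS_IBM_RETAIN, JmsConstants.RETAIN_PUBLICATION);
            producer.send(msg);
        } finally {
            connection.close();
        }
    }
}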
Related
I'm trying to understand the topology of queues and exchanges that MT (MassTransit) creates in RabbitMQ.
I can't quite make sense of these two statements:
we generate an exchange for each queue so that we can do direct sends to the queue. it is bound as a fanout exchange (is it about sending vs publishing?)
control queues are exclusive and auto-delete - they go away when you go away and are not shared.
Why does MT need to send direct messages? Does this relate to the control queues used by MT internally?
There is also no mention of a dead-letter queue; does that imply MT does not support one out of the box?
Oops, I was looking in the wrong place. It's here.
Michael Aldworth has described this in details in his excellent MassTransit Send vs. Publish blog post.
Essentially, for each MT endpoint you get an exchange-queue pair, named after the queueName parameter value in the endpoint configuration. Topics, on the other hand, are RabbitMQ exchanges named after the message contract's full type name.
Publish
You send a message to the topic, meaning to the message-type exchange. MassTransit creates a binding between the message-type exchange and the queue-name exchange on startup. This is how subscriptions work at the RabbitMQ level. The publisher never knows who will receive the published message, if anyone at all.
Send
When sending, however, you need to specify the receiver address. By doing this you instruct MassTransit to deliver the message directly to the queue-name exchange. No binding between the message-type exchange and the queue-name exchange is involved here. Therefore, the message will be delivered even if there is no consumer for this type of message at the target service. In that case the message is moved to the dead-letter queue (queue-name_Skipped).
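MassTransit itself is .NET, but the resulting broker topology can be sketched with the plain RabbitMQ Java client to make publish vs. send concrete; the exchange and queue names below are made-up placeholders, not MT's exact naming:

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class MtTopologySketch {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {

            // Endpoint side: a queue-name exchange bound (fanout) to the queue of the same name.
            channel.exchangeDeclare("order-service", "fanout", true);
            channel.queueDeclare("order-service", true, false, false, null);
            channel.queueBind("order-service", "order-service", "");

            // Subscription: the message-type exchange is bound to the queue-name exchange.
            channel.exchangeDeclare("Contracts:SubmitOrder", "fanout", true);
            channel.exchangeBind("order-service", "Contracts:SubmitOrder", "");

            byte[] body = "{\"orderId\":42}".getBytes("UTF-8");

            // Publish: goes to the message-type exchange; every bound endpoint gets a copy.
            channel.basicPublish("Contracts:SubmitOrder", "", null, body);

            // Send: goes straight to the queue-name exchange, bypassing subscriptions entirely.
            channel.basicPublish("order-service", "", null, body);
        }
    }
}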
Is there any way we can know when a consumer disconnects from a queue or when a queue is deleted?
The requirement is as follows:
I'm building a system in which multiple clients can subscribe to certain events from the system. All clients create their own queue and register themselves with the system using some form of authentication. The system, as the events are generated, filters them and forwards them to the clients that are eligible for them.
I have implemented a POC for most of it and it works well. One issue I'm not able to fix: if a client just disconnects from the queue (due to program termination or the like), the registration still exists and the system keeps trying to push messages to that client.
So we would like to be notified when a client disconnects or a queue gets deleted, so that we can remove that client's registration data and stop pushing messages to it.
Let your publisher use Confirms (aka Publisher Acknowledgements) and make the client queue exclusive and transient, so that only one client at a time consumes from a given queue and the queue is deleted after that client disconnects.
If you publish a message that would be routed to only one queue and that queue is gone (assuming you use publisher confirms and publish the message with the mandatory flag set), the publisher is notified that the message cannot be routed, with the message returned back to it, so you can stop publishing messages for that client.
For details, see the "How Confirms Work" section in the RabbitMQ blog post "Introducing Publisher Confirms" and the official Confirms (aka Publisher Acknowledgements) docs.
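A minimal sketch of that setup with the RabbitMQ Java client (queue name, routing key and payload are placeholders):

import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.ReturnListener;

public class NotifyingPublisherSketch {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {

            channel.confirmSelect(); // enable publisher confirms on this channel

            // Unroutable mandatory messages are returned here, i.e. the client's
            // exclusive, auto-delete queue is gone, so its registration can be dropped.
            channel.addReturnListener(new ReturnListener() {
                public void handleReturn(int replyCode, String replyText, String exchange,
                                         String routingKey, AMQP.BasicProperties properties,
                                         byte[] body) {
                    System.out.println("Queue gone, dropping registration for " + routingKey);
                }
            });

            // The consumer side would have declared its queue roughly like this:
            // channel.queueDeclare("client-42", false, true, true, null); // exclusive, auto-delete

            byte[] event = "{\"event\":\"reading\"}".getBytes("UTF-8");
            // mandatory=true so the broker returns the message if no queue exists for it.
            channel.basicPublish("", "client-42", true, null, event);
            channel.waitForConfirms(); // block until the broker confirms (or nacks)
        }
    }
}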
I have set up a uniform distributed queue with WebLogic Server 12c. I am trying to achieve order of delivery and high availability with a JMS distributed queue. In my prototype test deployment I have two managed servers in the cluster, say managed_server1 and managed_server2. Each of these managed servers hosts a JMS server, namely jms_server1 and jms_server2. I have configured the JMS servers with a JDBC persistent store. I have enabled server affinity.
I have a producer running, e.g. java queueproducer t3://managed_server1. I send out 4 messages. From the WebLogic monitoring console I can see there are 4 messages in the queue, since there are no consumers on the queue yet.
Now I shut down managed_server1.
Bring up a consumer to listen on java queueconsumer t3://managed_server2. This consumer cannot consume the messages, since the producer sent all of them to jms_server1, and it is down.
Bring up managed_server1 and start a consumer listening on t3://managed_server1; I get all the messages.
Here is my problem: say managed_server1 went down and never came back up, do I lose all my messages? Also, if there is another producer sending messages via t3://managed_server2, then the ordering of messages between these two producers, based on time, is not guaranteed.
I am a little lost; am I missing something? Can unit-of-order help me overcome this? Or should I use a distributed topic instead of a distributed queue, where all the JMS servers receive all the messages from the producers? But there is only one consumer in my application, and if the JMS server my consumer is listening to fails and I switch over to another JMS server, I might start getting messages from the beginning rather than from where I left off.
Any suggestions regarding the same will be helpful.
Good question!
" Here is my problem say if the managed_server1 went down then there it never came back up, do i loose all my messages. "
Ans - No, you do not lose all your messages; they are stored in the JDBC store configured for the JMS server deployed on managed_server1. If you want the messages sent to managed_server1 to be consumed from managed_server2, you need to configure JMS migration.
" Also if there is another producer sending messages to java queuproducer t3://managed_server2 then order of messages based on the time between these producers are not guanranteed. Can unit of order help me to overcome this."
Ans - If you want the messages to be consumed strictly in a certain order, then you will have to make use of unit-of-order (UOO). When messages are sent using UOO, they are sent to one of the several UDQ destinations; if that destination fails midway and migration is enabled, the messages are migrated to the next UDQ destination, and new messages for that UOO are also delivered to the new destination.
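A rough sketch of the producer side with UOO, using the WebLogic JMS extension API (the JNDI names and the UOO name are placeholders):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.InitialContext;
import weblogic.jms.extensions.WLMessageProducer;

public class UooProducerSketch {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext(); // t3:// provider URL configured via jndi.properties
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/MyConnectionFactory");
        Queue udq = (Queue) ctx.lookup("jms/MyDistributedQueue");

        Connection connection = cf.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(udq);

        // Messages with the same unit-of-order name are pinned to one UDQ member and
        // are delivered to consumers one at a time, in the order they were produced.
        ((WLMessageProducer) producer).setUnitOfOrder("sensor-42");
        for (int i = 1; i <= 4; i++) {
            producer.send(session.createTextMessage("reading " + i));
        }
        connection.close();
    }
}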
Useful links -
http://www.youtube.com/watch?v=B9J7q5NbXag
http://www.youtube.com/watch?v=_W3EJ8p35lI
Hope this helps.
I am using ActiveMQ as the JMS server in my application. The scenario is: there is a topic with many durable subscribers consuming the published messages, and a message listener which saves the data from each message to a central DB server. There is a producer thread which keeps publishing persistent messages on the same topic. I am using KahaDB as the persistent message store. As soon as a message is published, KahaDB creates a data log file in the message store to persist the message until all durable subscribers have consumed it. I want to know what the impact would be if, at any point, I shut down the JMS server and delete all the data log files. Will it just be that a few durable subscribers will not receive the messages that were sitting in the data log files waiting to be consumed, or is there also a possibility that a few messages never got saved to the central database by the message listener on this topic?
Any hint or help is greatly appreciated.
Thanks in advance.
If you stop and start your broker, regardless of whether you delete your data files or not, topic consumers that have not already received a published message will no longer receive it. The reason behind this is that messages sent to a topic are not written out to the persistent message store.
Durability and persistence are not the same thing. A durable subscription tells the broker to preserve the subscription state in case the subscriber disconnects; any messages sent while the consumer is disconnected will be kept around. A non-durable subscription, on the other hand, is finite: if a subscriber disconnects, it misses any messages sent in the interim. All messages are stored in memory and will not survive a broker restart.
Message persistence, on the other hand, stores messages for eventual delivery. This guards against catastrophic failure, and allows later delivery to consumers that might not yet be active.
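To make the distinction concrete, here is a small JMS sketch against ActiveMQ showing the two separate knobs; the broker URL, client ID, topic and subscription names are placeholders:

import javax.jms.Connection;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.Topic;
import javax.jms.TopicSubscriber;
import org.apache.activemq.ActiveMQConnectionFactory;

public class DurabilityVsPersistenceSketch {
    public static void main(String[] args) throws Exception {
        Connection connection =
                new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection();
        connection.setClientID("reporting-service"); // durable subscriptions need a client ID
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = session.createTopic("SENSOR.DATA");

        // Durability: the broker remembers this subscription while the client is away.
        TopicSubscriber durable = session.createDurableSubscriber(topic, "report-sub");

        // Persistence: the producer asks the broker to write the message to its store.
        MessageProducer producer = session.createProducer(topic);
        producer.setDeliveryMode(DeliveryMode.PERSISTENT);
        producer.send(session.createTextMessage("reading"));

        System.out.println("Durable subscriber got: " + durable.receive(5000));
        connection.close();
    }
}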
If you want to broadcast messages using pub-sub, and have the subscriptions appear durable and survive broker restarts, you should use virtual destinations instead of durable subscriptions.
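A sketch of that virtual-destination alternative, relying on ActiveMQ's default VirtualTopic naming convention; the topic and queue names are placeholders:

import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class VirtualTopicSketch {
    public static void main(String[] args) throws Exception {
        Connection connection =
                new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Producers publish to an ordinary topic under the VirtualTopic.* namespace.
        MessageProducer producer =
                session.createProducer(session.createTopic("VirtualTopic.SensorData"));

        // Each subscriber reads from its own queue; persistent messages forwarded to it
        // survive broker restarts, with no client ID or durable subscription needed.
        MessageConsumer dbWriter =
                session.createConsumer(session.createQueue("Consumer.DbWriter.VirtualTopic.SensorData"));

        producer.send(session.createTextMessage("reading"));
        System.out.println("DB writer got: " + dbWriter.receive(5000));
        connection.close();
    }
}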
No messages, persistent or non-persistent, will survive switching the broker off and deleting the data directory.
I am trying to find a message bus provider that supports durable subscribers and allows me to replay, in order, based on the message timestamp, all messages for a given topic. Furthermore, I would like the message bus to reset each durable consumer's checkpoint when a message arrives late. E.g.:
Client subscribes to topic 1 at 2009-12-22 12:00:00
Message 1 arrives, Timestamped 2009-12-22
Client receives Message 1
Client disconnects
Message 2 arrives, Timestamped 2009-12-21 18:00:00
Client connects
Client receives Message 2, then Message 1
I would strongly prefer an open-source provider. Does anyone know of a message bus provider that supports this? I've tried to read the intro documentation for ActiveMQ, MassTransit, etc., but I have to admit that I am behind the curve on message bus terminology, so a lot of it went over my head.
AMQP (implemented by RabbitMQ, et al) lets you define durable queues and attach them to the same exchange. Each client that wants to receive messages first sets up its own durable queue, which will hold messages received from the exchange even while the client is disconnected.
The only limitation of this is that clients that have never connected, and which arrive on the scene unexpectedly, cannot belatedly set up a queue and request a dump of all previous messages. AMQP 1.0 might allow such universal persistence, but I don't know the new model well enough to say for sure.
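A minimal sketch of that per-client durable queue pattern with the RabbitMQ Java client; the exchange and queue names are placeholders:

import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DefaultConsumer;
import com.rabbitmq.client.Envelope;
import java.nio.charset.StandardCharsets;

public class DurableSubscriberSketch {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        // One shared fanout exchange that all publishers send to.
        channel.exchangeDeclare("events", "fanout", true);

        // Each client declares its own durable (non-exclusive) queue once and binds it;
        // the queue keeps accumulating messages while the client is offline.
        channel.queueDeclare("events.client-A", true, false, false, null);
        channel.queueBind("events.client-A", "events", "");

        channel.basicConsume("events.client-A", true, new DefaultConsumer(channel) {
            @Override
            public void handleDelivery(String consumerTag, Envelope envelope,
                                       AMQP.BasicProperties properties, byte[] body) {
                System.out.println("Got: " + new String(body, StandardCharsets.UTF_8));
            }
        });
        // The connection stays open so the consumer keeps receiving; close it on shutdown.
    }
}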
You may want to look at the Spring Integration project.
http://www.springsource.org/spring-integration