Tibco EMS Queue needs to be Purged when Bridged - jms

I have a Tibco EMS Message Queue on a production system that routes messages from a single producer to a single consumer application.
We are scaling the application vertically, but due to financial constraints, we cannot scale the two applications in pairs.
A single producer will route messages to the message broker and the message broker will need to bridge the message to 1 of 3 consumers based on a message selector.
I have set up a queue bridge and selector to route messages on the producer queue to consumer queues. This is a 1 to many queue bridge.
I noticed that the bridged consumer queues have consumers attached to them and they receive the messages correctly based on the selector; however, the producer queue retains copies of the messages, which must be manually purged at the end of the day.
What is the best way to handle this scenario using bridges and selectors so that a message is retained on the broker until it is consumed (durable), but once it has been consumed from a consumer queue it is removed by the broker?

The easiest way to dispose of those messages on the original queue is by introducing MaxMsgs and MaxBytes limits on that queue.
As for your requirement that a message on the original queue only be disposed of once it has been consumed from one of the bridged queues: this is not possible.
That said, it is also not needed, since each bridged queue keeps its own copy of the message regardless of what happens to the message in the source queue. Expiring messages in the original queue therefore has no effect on the already bridged messages.
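If letting the leftover copies expire is acceptable, another option (instead of, or alongside, MaxMsgs/MaxBytes) is to set a time-to-live on the messages at the producer, so the source-queue copies clean themselves up without a manual end-of-day purge. A minimal plain-JMS sketch; the EMS URL, credentials, destination name and selector property below are placeholders, not taken from the original setup:

```java
import javax.jms.*;

import com.tibco.tibjms.TibjmsConnectionFactory;

public class ExpiringProducer {
    public static void main(String[] args) throws JMSException {
        // Illustrative EMS connection; server URL and credentials are placeholders.
        ConnectionFactory factory = new TibjmsConnectionFactory("tcp://ems-host:7222");
        Connection connection = factory.createConnection("user", "password");
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(session.createQueue("ORDERS.SOURCE"));

            // Messages stay PERSISTENT (survive a broker restart) but carry a 24-hour
            // time-to-live, so copies left behind on the source queue expire on their own.
            producer.setDeliveryMode(DeliveryMode.PERSISTENT);
            producer.setTimeToLive(24L * 60 * 60 * 1000);

            TextMessage message = session.createTextMessage("payload");
            message.setStringProperty("CONSUMER_ID", "CONSUMER_1"); // property used by the bridge selector
            producer.send(message);
        } finally {
            connection.close();
        }
    }
}
```

Note that the JMSExpiration header should travel with the bridged copies as well, so this assumes the consumer queues are drained well within the TTL.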

Related

Do messages get deleted from the queue after a read operation in IBM MQ?

I am using Nifi to get data from IBM MQ. It is working fine. My question is once the message is read from an MQ queue, does it get deleted from the queue? How to just read messages from the queue without deleting them from the queue?
My question is once the message is read from an MQ queue, does it get deleted from the queue?
Yes, that is the default behavior.
How to just read messages from the queue without deleting them from the queue?
You use the option: MQGMO_BROWSE_FIRST followed by MQGMO_BROWSE_NEXT on the MQGET API calls.
You can also open the queue for browse only, i.e. the MQOO_BROWSE option on the MQOPEN API call.
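If the reading is done through JMS (which is typically how NiFi talks to IBM MQ), the non-destructive equivalent at the API level is a QueueBrowser, which the IBM MQ classes for JMS implement using browse mode. A minimal sketch; how the ConnectionFactory is built and the queue name are left as placeholders:

```java
import java.util.Enumeration;
import javax.jms.*;

public class BrowseOnly {
    // Prints messages without removing them; how the ConnectionFactory is obtained
    // (JNDI, MQConnectionFactory, etc.) is left out of this sketch.
    public static void browse(ConnectionFactory factory, String queueName) throws JMSException {
        Connection connection = factory.createConnection();
        try {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            QueueBrowser browser = session.createBrowser(session.createQueue(queueName));

            // Browsing is non-destructive: the messages stay on the queue afterwards.
            Enumeration<?> messages = browser.getEnumeration();
            while (messages.hasMoreElements()) {
                Message message = (Message) messages.nextElement();
                System.out.println(message.getJMSMessageID());
            }
        } finally {
            connection.close();
        }
    }
}
```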
It sounds as if you would like to use a "publish/subscribe" model rather than a "point-to-point" model.
From ActiveMQ:
Topics: In JMS a Topic implements publish and subscribe semantics. When you publish a message it goes to all the subscribers who are interested - so zero to many subscribers will receive a copy of the message. Only subscribers who had an active subscription at the time the broker receives the message will get a copy of the message.
Queues: A JMS Queue implements load balancer semantics. A single message will be received by exactly one consumer. If there are no consumers available at the time the message is sent it will be kept until a consumer is available that can process the message. If a consumer receives a message and does not acknowledge it before closing then the message will be redelivered to another consumer. A queue can have many consumers with messages load balanced across the available consumers.
If you have a queue, when a consumer consumes that message, it is removed from the queue so that the next consumer consumes the next message. With a topic, multiple consumers can be subscribed to that topic and retrieve the same message without being exclusive.
If neither of these work for you, I'm not sure what semantics you're looking for -- a "queue" which doesn't delete the message when it is consumed will never let a consumer access any but the first message.
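The difference shows up directly in the JMS API: the code is almost identical and only the destination type changes. A rough sketch, with destination names invented for illustration:

```java
import javax.jms.*;

public class QueueVsTopic {
    public static void listen(ConnectionFactory factory) throws JMSException {
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Point-to-point: each message on ORDERS is delivered to exactly one of the
        // consumers attached to the queue, and is removed once it is acknowledged.
        MessageConsumer queueConsumer = session.createConsumer(session.createQueue("ORDERS"));
        queueConsumer.setMessageListener(m -> System.out.println("queue consumer got a message"));

        // Publish/subscribe: every active subscriber on ORDERS.EVENTS receives its own
        // copy of each published message.
        MessageConsumer topicConsumer = session.createConsumer(session.createTopic("ORDERS.EVENTS"));
        topicConsumer.setMessageListener(m -> System.out.println("topic subscriber got a message"));
    }
}
```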

ActiveMQ not delivering/dispatching persistent messages on queues

I am using ActiveMQ v5.10.0 and having an issue almost every weekend where my ActiveMQ instance stops delivering persistent messages sent on queues to the consumers. I have not been able to figure out what could be causing this.
While the issue was happening I tried the following things:
I added a new consumer on the affected queue but it didn't receive any messages.
I restarted the original consumer but it didn't receive any messages after the restart.
I purged the messages that were held on the queue, but then messages started accumulating again and the broker didn't deliver any of the new messages. When I purged, neither the expiry count nor the dequeue and dispatch counters increased.
I sent 100 non-persistent messages on the affected queue; surprisingly, the consumer received those messages.
I tried sending 100 persistent messages on that queue and it didn't deliver any of them; all the messages were held by the broker.
I created a fresh new queue and sent 100 persistent messages and none of them was delivered to the consumer whereas all the non-persistent messages were delivered.
The same things happen if I send persistent or non-persistent messages from STOMP producers. Surprisingly all this happened only for queues, topic consumers were able to receive persistent as well as non-persistent messages.
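For reference, the persistent/non-persistent distinction in the tests above is controlled per message by the JMS delivery mode. A minimal sketch of that kind of test producer against ActiveMQ; the broker URL and queue name are placeholders:

```java
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class DeliveryModeTest {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(session.createQueue("TEST.QUEUE"));

            // NON_PERSISTENT messages normally bypass the persistent store, which is
            // why they can still flow when the store or its cursors are the bottleneck.
            producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
            producer.send(session.createTextMessage("non-persistent test"));

            // PERSISTENT messages are written to the store (KahaDB by default) before
            // dispatch, so problems at the store level tend to show up only for these.
            producer.setDeliveryMode(DeliveryMode.PERSISTENT);
            producer.send(session.createTextMessage("persistent test"));
        } finally {
            connection.close();
        }
    }
}
```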
I have already posted this on ActiveMQ user forum: http://activemq.2283324.n4.nabble.com/Broker-not-delivering-persistent-messages-to-consumer-on-queue-td4691245.html but no one from ActiveMQ has suggested anything.
The jstack output also isn't very helpful.
More details:
1. I am not using any selectors or the message groups feature
2. I have disabled producer flow control in my setup
I want some suggestions as to what configuration values might cause this issue: memory limits, message TTL, etc.

JMS/Active MQ - broker vs consumer redelivery

As I understand (http://activemq.apache.org/message-redelivery-and-dlq-handling.html) a redelivery can be done by either a consumer or a broker.
I have some questions though:
How does redelivery by a consumer work underneath? Does the consumer cache the message from the broker and redeliver it locally? What happens if the consumer terminates in between? Will such a message be lost? I think that as long as the consumer does not acknowledge the message it shouldn't be. But in that case, will the message still be available on the broker?
Are there any guidelines when to use broker vs consumer redelivery? Any recommendations?
The consumer does cache and redeliver the message to the client locally until the redelivery count is met, then automatically acks the message as bad (a poison ack). A consumer can control whether the message gets marked as redelivered depending on the acknowledgment mode. If for whatever reason a consumer can't or does not want to process a message, it can also kick it back, and it will be available for consumption again once the session is closed.
The broker will hold onto the message until it gets an ack from the consumer. If your consumer is set to AUTO_ACKNOWLEDGE then it is possible you could lose the message if an unhandled exception occurs or the consumer ends unexpectedly for example.
Otherwise, if your consumer is using transactions or CLIENT_ACKNOWLEDGE it will give you the control on when that occurs.
With transactions, if the consumer drops prior to a commit it will be available for the next consumer or whenever that consumer reconnects.
I've always used transactions over CLIENT_ACKNOWLEDGE, so I don't want to say for sure whether the message will be lost if Session.recover() is not called before the consumer goes down.
From a consumer standpoint, this is also known as retry logic.
Regarding broker vs consumer redelivery: by default, the broker just keeps giving the consumer the same message until the redelivery count is met. If you tell the broker not to redeliver it after a given amount of time, your consumer can move on to other messages that it may be able to process.
When to do this is really up to what is going on in your application. Maybe a specific message needs to be put to a database and that database is not available at the moment and you want to move on to messages that go elsewhere/have another purpose?
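On the ActiveMQ client side, the consumer redelivery described above is tuned through the connection factory's RedeliveryPolicy; once the limit is exceeded the message is poison-acked and ends up on the DLQ. A rough sketch of a transacted consumer with a bounded policy (the values and queue name are arbitrary examples):

```java
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.RedeliveryPolicy;

public class RedeliveryExample {
    public static void main(String[] args) throws JMSException {
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");

        // Consumer-side redelivery settings: up to 6 redeliveries, starting after 1s,
        // with exponential backoff. After the last failure the message is poison-acked
        // and the broker moves it to the default dead letter queue (ActiveMQ.DLQ).
        RedeliveryPolicy policy = factory.getRedeliveryPolicy();
        policy.setMaximumRedeliveries(6);
        policy.setInitialRedeliveryDelay(1000);
        policy.setUseExponentialBackOff(true);

        Connection connection = factory.createConnection();
        connection.start();

        // Transacted session: rollback() makes the message eligible for redelivery.
        Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
        MessageConsumer consumer = session.createConsumer(session.createQueue("WORK.QUEUE"));
        Message message = consumer.receive(5000);
        try {
            // ... process the message ...
            session.commit();
        } catch (Exception e) {
            session.rollback(); // message comes back, counted against the redelivery limit
        } finally {
            connection.close();
        }
    }
}
```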

Moving consumed message from a topic to a queue

Let's suppose I have several subscribers consuming from a topic. After a message has been delivered to all the subscribers I'd like to trigger a job that would use this message as input.
So the easy way to do that would be to move messages that have been successfully delivered to all the subscribers to a queue from which my job would consume messages.
Is it part of JMS?
Is there any message broker able to do that directly?
If not is there a simple solution to solve this problem?
You should be able to do this using activemq's advisories.
See here for more about advisory messages: http://activemq.apache.org/advisory-message.html
So what you want to do, for the topic in question, is track:
the number of consumers
when a message is dispatched to them
when the message has been ack'd by each of the consumers
to get the number of consumers, listen to the "ActiveMQ.Advisory.Consumer.Topic." advisory topic
to get when a message is dispatched, listen to the "ActiveMQ.Advisory.MessageDelivered.Topic."
to get when a message has been ack'd, listen to "ActiveMQ.Advisory.MessageConsumed.Topic."
You could easily use Apache Camel to help out with this (listening to the topics) and aggregating whether or not all consumers have processed (ack'd) the message; that could then kick off your further processing.
You could just create another durable subscription to route the message from the topic to queue directly. From that queue your job can consume messages. This is much easier than creating a trigger to route the messages to a queue.
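A minimal sketch of that extra durable subscription, forwarding every message from the topic into a work queue for the job; all destination, client and subscription names here are invented for illustration:

```java
import javax.jms.*;

public class TopicToQueueForwarder {
    // Runs as one more durable subscriber on the topic; everything it receives is
    // re-sent to a queue that the job consumes from.
    public static void start(ConnectionFactory factory) throws JMSException {
        Connection connection = factory.createConnection();
        connection.setClientID("topic-to-queue-forwarder"); // required for a durable subscription
        connection.start();

        Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
        Topic topic = session.createTopic("EVENTS.TOPIC");
        Queue jobQueue = session.createQueue("JOB.INPUT.QUEUE");

        MessageConsumer subscriber = session.createDurableSubscriber(topic, "job-feed");
        MessageProducer producer = session.createProducer(jobQueue);

        subscriber.setMessageListener(message -> {
            try {
                producer.send(message);   // forward the same message to the queue
                session.commit();         // consume from topic and enqueue atomically
            } catch (JMSException e) {
                try {
                    session.rollback();   // leave the message on the durable subscription
                } catch (JMSException ignored) {
                }
            }
        });
    }
}
```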
So the easy way to do that would be to move messages that have been successfully delivered to all the subscribers to a queue from which my job would consume messages. Is it part of JMS?
No, this is not part of JMS specification.

Single point to point queue and multiple listeners

I have a single point-to-point IBM MQ queue receiving messages from multiple producers. My application consumes the messages from the queue. I am using Spring's JmsTemplate and DefaultMessageListenerContainer to consume the messages asynchronously.
My application is running on 2 JVMs, meaning there are 2 listeners active (one on each JVM) listening to the same queue.
Coming to my questions, if a message arrives...
1) How do the listeners know that the message arrived in the queue?
2) Out of the two listeners, which one will receive the message? What approach is followed to distribute the messages to the listeners?
3) Can I scale to N listeners for a single queue? If I grow to 10 listeners, how does the scaling work? How are the messages distributed to the listeners?
4) How does the MQ server make sure that the same message is not sent to multiple listeners?
Maybe these are simple questions, but I am not able to drill down into how the above scenarios work. Please share your thoughts...
1) That's a function of the IBM client library; the listener container simply polls the JMS API waiting for a message. By default, it uses a 1 second receive timeout; with TRACE level logging, you will see log messages showing this activity. The timeout can be modified by setting receiveTimeout on the container.
2) That is indeterminate from the client's perspective; the IBM broker knows how many consumers are registered and picks one. Some brokers allow configuration of a pre-fetch; this can help performance under high volume but can hurt performance under low volume.
3) Yes; the Spring listener container can dynamically scale the listeners based on load. You can configure min/max consumers and Spring will adjust within those bounds as necessary. Each listener is a separate consumer as far as the broker is concerned, so the work is distributed according to the broker's algorithms.
4) That's a function of the IBM broker (and part of the JMS contract).
If using transactions and a message is rolled back onto the queue, there is no guarantee that the same listener will get the redelivered message.
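For point 3), the dynamic scaling is configured on the container itself. A rough sketch using Spring's DefaultMessageListenerContainer, with the connection factory assumed to come from elsewhere in your configuration and the queue name a placeholder; when declared as a Spring bean the container's lifecycle (start/stop) is managed for you:

```java
import javax.jms.ConnectionFactory;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

public class ListenerConfig {
    public static DefaultMessageListenerContainer container(ConnectionFactory connectionFactory) {
        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        container.setDestinationName("MY.REQUEST.QUEUE"); // placeholder queue name

        // Start with 2 concurrent consumers and let Spring scale up to 10 under load;
        // each one is a separate JMS consumer as far as the queue manager is concerned.
        container.setConcurrency("2-10");

        // How long each receive() blocks before the container loops again (the 1 second
        // polling mentioned above); increase it to reduce idle polling.
        container.setReceiveTimeout(1000);

        container.setMessageListener((javax.jms.MessageListener) message ->
                System.out.println("handled on " + Thread.currentThread().getName()));
        return container;
    }
}
```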
