How to ensure order preservation? - ibm-mq

We have an application that will be publishing messages to a single Topic. The messages are expected to be consumed by multiple Subscriber applications in the exact same order in which they were published.
The complication is that each subscriber will be using a different Message Selector to filter on the messages based on their properties. The filters will be such that there is no overlap among the messages read by the subscribers.
For example:
Time  Message  Property
t1    m1       red
t2    m2       blue
t3    m3       red
Assuming subscriber S1 subscribes for reading color=red and S2 subscribes for reading color=blue, we need S1 to read m1 and then block until S2 has read m2. Only once S2 has read m2 should S1 read m3.
Is this possible with WebSphere MQ 7.0, and if so, what configuration should we use for the queue manager and which option should we use in the MQGET operation?
Thanks,
Yash

In publish/subscribe messaging the publisher is not aware of the subscribers, nor is a subscriber aware of the presence of other subscribers. I don't think any messaging provider has the feature you are looking for.
It may be simple enough for you to implement an event mechanism in which S1 informs S2 that it has received its message; a rough sketch follows.
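If you do go down that route, a bare-bones JMS sketch of the hand-off might look like the following. The coordination queue and the helper method names are invented for illustration; none of this is a built-in WebSphere MQ feature.
import javax.jms.*;

public class HandOffExample {
    // S1 calls this after it has processed its own message: it drops a small
    // "your turn" token onto a coordination queue that S2 is waiting on.
    static void signalNext(Session session, Queue coordinationQueue) throws JMSException {
        MessageProducer producer = session.createProducer(coordinationQueue);
        producer.send(session.createMessage());
        producer.close();
    }

    // S2 calls this before consuming from the topic; receive() blocks until
    // the token from S1 arrives.
    static void waitForTurn(Session session, Queue coordinationQueue) throws JMSException {
        MessageConsumer consumer = session.createConsumer(coordinationQueue);
        consumer.receive();
        consumer.close();
    }
}
Whether this is acceptable depends on how much coupling between S1 and S2 you are willing to introduce; it effectively serializes the two subscribers by hand.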

Related

IBM IIB 9 and MQ: set the max number of messages per minute that may be placed in a queue

So I have IIB 9 and WMQ. For example, there are two local queues, Q1 and Q2. Messages go from Q1 to Q2. Can I somehow limit how many messages per minute may go from Q1 to Q2? I want Q1 not to place more than N messages per minute onto Q2, but I also don't want to lose messages from Q1 and I don't want to overwrite messages in Q2. Maybe just turn off the ability to place messages onto Q2 from Q1 for X seconds once Q1 has reached its limit.
If you are using IBM Integration Bus to build your message flows you can do this.
MQ Input (Q1) --> Mediation --> MQ Output (Q2)
You can create a Workload Management Policy in IBM Integration Bus 9.0.
In this kind of policy you can define the processing rate and attach this to the message flow.
Configuration

MQ AtoB and AtoC communication using a single queue?

I have just started work on a software which uses IBM MQ for some communication.
As I understand it, MQ can be used for many-to-one and one-to-many communication.
Let's say there are 3 business applications A, B and C. A wants to send a message using MQ to B and another message to C, but A is using only one queue, Queue1.
Now my question is whether we can define (in the MQMD or otherwise) that a certain message is only for B and NOT for C, so that only B can retrieve it from Queue1 whenever B is available. If not, how can we do this, if it is possible at all?
One other thing: can we make a separate queue, Queue2, only for A-to-B communication?
It is better to use separate queues. For example use queue QA2B for application A to send messages to application B and QA2C for application A to send messages to application C. This way traffic is separated out and you can administratively restrict application B from receiving messages meant for C and vice versa.
It is possible to use just one queue, wherein application A, while sending messages, sets a message property that says something like "Message is for B" or "Message is for C". Application B uses a selector to match the property value "Message for B" while receiving messages. Similarly application C uses the selector "Message for C" and receives its messages. But note that if either B or C receives messages without any selector, messages can end up in the wrong hands. A rough sketch of this approach follows.
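In JMS terms the single-queue approach might look like this; the property name "target" and the destination variable names are placeholders, not anything MQ mandates.
import javax.jms.*;

public class SingleQueueRouting {
    // Application A tags each message with its intended receiver.
    static void sendTo(Session session, Queue queue1, String target, String body) throws JMSException {
        TextMessage message = session.createTextMessage(body);
        message.setStringProperty("target", target);   // e.g. "B" or "C"
        session.createProducer(queue1).send(message);
    }

    // Application B only receives messages whose property matches its selector;
    // application C would use "target = 'C'" instead.
    static MessageConsumer consumerForB(Session session, Queue queue1) throws JMSException {
        return session.createConsumer(queue1, "target = 'B'");
    }
}
The selector keeps honest consumers apart, but, as noted above, nothing stops a consumer without a selector from draining everything.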

Load balancing Tibco EMS topic subscribers

I have a Tibco EMS topic subscriber which I need to load balance among different instances. Each published message to the topic needs to be received by one (and only one) instance of each subscriber load balance group.
Just using global topics and balanced EMS connections (tcp://localhost:7222|tcp://localhost:7224) results in the same message being received by all instances of each subscriber load balance group, producing duplicates.
Do you know any alternative for load balancing topic subscribers?
You can:
A) Bridge the topic to a queue and reconfigure your subscribers to read from the queue. Queues behave differently to topics in that a message is only obtained by one subscriber rather than all.
B) Create a number of durable subscribers on the topic with selectors that divide messages between the durables. E.g. If a message has a property 'id' that is sequentially increasing:
create durable topic DURABLENAME1 selector="(id - 2 * (id / 2)) = 0"
create durable topic DURABLENAME2 selector="(id - 2 * (id / 2)) = 1"
The selector is just a modulo, so half the messages will go to one durable and half to the other; in JMS terms the equivalent subscription looks like the sketch below.
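Here is a rough JMS version of option B; the topic, durable name, and the assumption that publishers set an integer 'id' property all come from the description above, and the remainder argument (0 or 1) picks which half of the stream this instance sees.
import javax.jms.*;

public class ModuloDurableSubscriber {
    // remainder is 0 for the first durable and 1 for the second, matching the
    // "(id - 2 * (id / 2)) = n" selectors above.
    static TopicSubscriber subscribe(TopicConnection connection, Topic topic,
                                     String durableName, int remainder) throws JMSException {
        TopicSession session = connection.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
        String selector = "(id - 2 * (id / 2)) = " + remainder;
        return session.createDurableSubscriber(topic, durableName, selector, false);
    }
}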
With EMS 8.0 a new concept, shared subscriptions, was added: consumers that share the same subscription name form a group in which each message is received by only one of them. Go through the EMS user guide docs; they may help you.
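If your EMS client libraries expose the JMS 2.0 API (worth checking for your exact version), a shared subscription in standard JMS terms looks roughly like this; the subscription name is a placeholder.
import javax.jms.*;

public class SharedSubscriptionSketch {
    // Every consumer created with the same subscription name joins the group;
    // each published message is delivered to only one member of the group.
    static JMSConsumer joinGroup(ConnectionFactory factory, Topic topic) {
        JMSContext context = factory.createContext();
        return context.createSharedConsumer(topic, "load-balanced-group");
    }
}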
While both previous answers are valid, the most natural approach would be not to use topics at all.
Using queues instead of topics does the whole job (load balancing in round-robin fashion).

JMS Topic vs Queues

I was wondering what is the difference between a JMS Queue and JMS Topic.
The ActiveMQ page says:
Topics
In JMS a Topic implements publish and subscribe semantics. When you publish a message it goes to all the subscribers who are interested - so zero to many subscribers will receive a copy of the message. Only subscribers who had an active subscription at the time the broker receives the message will get a copy of the message.
Queues
A JMS Queue implements load balancer semantics. A single message will be received by exactly one consumer. If there are no consumers available at the time the message is sent it will be kept until a consumer is available that can process the message. If a consumer receives a message and does not acknowledge it before closing then the message will be redelivered to another consumer. A queue can have many consumers with messages load balanced across the available consumers.
I want to have 'something' that will send a copy of the message to each subscriber, in the same sequence in which the messages were received by the ActiveMQ broker.
Any thoughts?
That means a topic is appropriate. A queue means a message goes to one and only one possible subscriber. A topic goes to each and every subscriber.
It is simple as that:
Queues = Insert > Withdraw (send to single subscriber) 1:1
Topics = Insert > Broadcast (send to all subscribers) 1:n
Topics are for the publisher-subscriber model, while queues are for point-to-point.
A JMS topic is the type of destination in a 1-to-many model of distribution.
The same published message is received by all consuming subscribers. You can also call this the 'broadcast' model. You can think of a topic as the equivalent of the Subject in an Observer design pattern for distributed computing. Some JMS providers choose to implement this efficiently as UDP instead of TCP. For topics, message delivery is 'fire-and-forget': if no one listens, the message just disappears. If that's not what you want, you can use 'durable subscriptions'.
A JMS queue is a 1-to-1 destination for messages. A message is received by only one of the consuming receivers (please note: consistently using 'subscribers' for topic clients and 'receivers' for queue clients avoids confusion). Messages sent to a queue are stored on disk or in memory until someone picks them up or they expire. So queues (and durable subscriptions) need some active storage management, and you need to think about slow consumers.
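To make the distinction concrete, here is a minimal JMS sketch (the destination names are placeholders): the topic send reaches every subscriber with an active or durable subscription, while the queue send reaches exactly one consumer.
import javax.jms.*;

public class TopicVsQueueSketch {
    static void publishToTopic(Session session, Topic topic, String text) throws JMSException {
        // Every current (or durable) subscriber gets its own copy of this message.
        session.createProducer(topic).send(session.createTextMessage(text));
    }

    static void sendToQueue(Session session, Queue queue, String text) throws JMSException {
        // Exactly one of the queue's consumers receives this message.
        session.createProducer(queue).send(session.createTextMessage(text));
    }
}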
In most environments, I would argue, topics are the better choice because you can always add additional components without having to change the architecture. Added components could be monitoring, logging, analytics, etc.
You never know at the beginning of the project what the requirements will be like in 1 year, 5 years, 10 years. Change is inevitable, embrace it :-)
Queues
Pros
Simple messaging pattern with a transparent communication flow
Messages can be recovered by putting them back on the queue
Cons
Only one consumer can get the message
Implies a coupling between producer and consumer, as it's a one-to-one relation
Topics
Pros
Multiple consumers can get a message
Decoupling between producer and consumers (publish-and-subscribe pattern)
Cons
More complicated communication flow
A message cannot be recovered for a single listener
As for the order preservation, see this ActiveMQ page. In short: order is preserved for single consumers, but with multiple consumers order of delivery is not guaranteed.
If you have N consumers then:
JMS Topics deliver messages to N of N
JMS Queues deliver messages to 1 of N
You said you are "looking to have a 'thing' that will send a copy of the message to each subscriber in the same sequence as that in which the message was received by the ActiveMQ broker."
So you want to use a Topic in order that all N subscribers get a copy of the message.
TOPIC: a topic is one-to-many communication (multipoint, or publish/subscribe).
Example: imagine a publisher publishes a movie on YouTube; all of its subscribers then get a notification.
QUEUE: a queue is one-to-one communication.
Example: when you publish a recharge request, it goes to only one receiver.
Always remember that if the request went to all receivers, multiple recharges would happen, so while developing, analyze which model fits your application.
A queue is a JMS-managed object used to hold messages until consumers pick them up; once a message has been consumed it is removed from the queue.
With a topic, all subscribers to the topic receive the same message when it is published.

activemessaging with stomp and activemq.prefetchSize=1

I have a situation where I have a single activemq broker with 2 queues, Q1 and Q2. I have two ruby-based consumers using activemessaging. Let's call them C1 and C2. Both consumers subscribe to each queue. I'm setting activemq.prefetchSize=1 when subscribing to each queue. I'm also setting ack=client.
Consider the following sequence of events:
1) A message that triggers a long-running job is published to queue Q1. Call this M1.
2) M1 is dispatched to consumer C1, kicking off a long operation.
3) Two messages that trigger short jobs are published to queue Q2. Call these M2 and M3.
4) M2 is dispatched to C2 which quickly runs the short job.
5) M3 is dispatched to C1, even though C1 is still running M1. It's able to dispatch to C1 because prefetchSize=1 is set on the queue subscription, not on the connection. So the fact that a Q1 message has already been dispatched doesn't stop one Q2 message from being dispatched.
Since activemessaging consumers are single-threaded, the net result is that M3 sits and waits on C1 for a long time until C1 finishes processing M1. So, M3 is not processed for a long time, despite the fact that consumer C2 is sitting idle (since it quickly finishes with message M2).
Essentially, whenever a long Q1 job is run and then a whole bunch of short Q2 jobs are created, exactly one of the short Q2 jobs gets stuck on a consumer waiting for the long Q1 job to finish.
Is there a way to set prefetchSize at the connection level rather than at the subscription level? I really don't want any messages dispatched to C1 while it is processing M1. The other alternative is that I could create a consumer dedicated to processing Q1 and then have other consumers dedicated to processing Q2. But, I'd rather not do that since Q1 messages are infrequent--Q1's dedicated consumers would sit idle most of the day tying up memory.
The activemq.prefetchSize is only available on a SUBSCRIBE message, not a CONNECT, according to the ActiveMQ docs for their extended stomp headers (http://activemq.apache.org/stomp.html). Here is the relevant info:
verb: SUBSCRIBE
header: activemq.prefetchSize
type: int
description: Specifies the maximum number of pending messages that will be dispatched to the client. Once this maximum is reached no more messages are dispatched until the client acknowledges a message. Set to 1 for very fair distribution of messages across consumers where processing messages can be slow.
My reading of this, and my experience with it, is that since M1 has not been acknowledged (because you have client ack turned on), M1 should be the one message allowed by prefetchSize=1 on that subscription. I am surprised to hear that it didn't work, but perhaps I need to run a more detailed test. Your settings should be correct for the behavior you want.
I have heard of flakiness from others about the activemq dispatch, so it is possible this is a bug with the version you are using.
One suggestion I would have is to either sniff the network traffic to see if the M1 is getting ack'd for some reason, or throw some puts statements into the ruby stomp gem to watch the communication (this is what I usually end up doing when debugging stomp problems).
If I get a chance to try this out, I'll update my comment with my own results.
One suggestion: it is very possible that multiple long-processing messages could be sent, and if the number of long-processing messages exceeds your number of processes, you'll be back in this fix, with quick-processing messages waiting.
I tend to have at least one dedicated process that just does quick jobs, or, to put it another way, to dedicate a set number of processes that just do longer jobs. Having all poller consumer processes listen to both long and short jobs can end up with sub-optimal results no matter what dispatch does. Processor groups are the way to configure a consumer to listen to a subset of destinations: http://code.google.com/p/activemessaging/wiki/Configuration
processor_group name, *list_of_processors
A processor group is a way to run the poller to only execute a subset of the processors by passing the name of the group in the poller command line arguments.
You specify the name of the processor as its underscored lowercase version. So if you have a FooBarProcessor and BarFooProcessor in a processor group, it would look like this:
ActiveMessaging::Gateway.define do |s|
...
s.processor_group :my_group, :foo_bar_processor, :bar_foo_processor
end
The processor group is passed into the poller like the following:
./script/poller start -- process-group=my_group
I'm not sure if ActiveMessaging supports this, but you could unsubscribe your other consumers when the long-processing message arrives and then re-subscribe them after it gets processed.
That should give you the desired effect.
