WebSphere MQ Cluster QMGR: mechanism for dispatching messages to nodes - ibm-mq

I am using WebSphere MQ 7.0 and here is my scenario:
I have a cluster queue manager named 'CLUSD' and two nodes in the cluster named 'N1' and 'N2'.
N1 and N2 have identical configurations, which means no priority is set for either queue.
When I send messages to CLUSD, the queue manager distributes them to the nodes (N1, N2), but there is no obvious mechanism explaining why N1 sometimes receives more messages than N2 and vice versa.
I have a message producer that sends messages in a while loop for a couple of minutes. After each minute I get the enqueue count for each node's queue, and there is always a difference between the counts for N1 and N2.
I know that when I work with WebSphere MQ I always have bigger fish to fry ;) but I want to get the same result when the configuration (software, hardware, etc.) is the same.
What can I do to achieve this?

As documented in the WebSphere MQ information center (http://publib.boulder.ibm.com/infocenter/wmqv7/v7r0/index.jsp?topic=/com.ibm.mq.csqzah.doc/qc10940_.htm):
The distribution of user messages is not always exact, because administration and maintenance of the cluster causes messages to flow across channels. The result is an uneven distribution of user messages which can take some time to stabilize. Because of the admixture of administration and user messages, place no reliance on the exact distribution of messages during workload balancing.
This blog describes more:
https://www.ibm.com/developerworks/community/blogs/aimsupport/entry/websphere_mq_clustering_workload_balancing_dick_hamilton14?lang=en
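For completeness, how the producer opens the clustered queue also matters. Below is a minimal sketch using the MQ classes for Java (not necessarily how your producer is written), assuming a clustered queue named 'CLUSQ' hosted on both N1 and N2 and reachable through 'CLUSD'; the connection details are placeholders. Opening with MQOO_BIND_NOT_FIXED asks the queue manager to workload-balance each message rather than binding every message to one instance at open time.

    import com.ibm.mq.MQMessage;
    import com.ibm.mq.MQPutMessageOptions;
    import com.ibm.mq.MQQueue;
    import com.ibm.mq.MQQueueManager;
    import com.ibm.mq.constants.CMQC;

    public class ClusterProducerSketch {
        public static void main(String[] args) throws Exception {
            // Placeholder connection in bindings mode; set MQEnvironment
            // host/channel/port properties instead if you connect as a client.
            MQQueueManager qmgr = new MQQueueManager("CLUSD");

            // MQOO_BIND_NOT_FIXED lets each put be workload-balanced across the
            // cluster instances of the queue (N1 and N2) instead of binding to one.
            int openOptions = CMQC.MQOO_OUTPUT | CMQC.MQOO_BIND_NOT_FIXED;
            MQQueue queue = qmgr.accessQueue("CLUSQ", openOptions);

            MQPutMessageOptions pmo = new MQPutMessageOptions();
            for (int i = 0; i < 1000; i++) {
                MQMessage msg = new MQMessage();
                msg.writeString("test message " + i);
                queue.put(msg, pmo);
            }
            queue.close();
            qmgr.disconnect();
        }
    }

Even with this, expect the counts on N1 and N2 to drift apart somewhat, for the reason quoted above.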

Related

JMS with mandatory scalability (Active-Active-...-Active) and ordering?

I'm looking for a JMS provider that must have these additional characteristics:
Be multi-broker, where all brokers must be active (no single point of failure)
Scalability on only two machines would be sufficient for our needs
Be able to guarantee ordering (if 1 producer + 1 consumer)
We have tried ActiveMQ 5.14, which seemed to be OK for both of our requirements, but only when they are considered separately:
"ActiveMQ: To provide massive scalability of a large messaging fabric you typically want to allow many brokers to be connected together into a network so that you can have as many clients as you wish all logically connected together - and running as many message brokers as you need based on your number of clients and network topology. ... If you are using client/server or hub/spoke style topology then the broker you connect to becomes a single point of failure which is another reason for wanting a network (or cluster) of brokers so that you can survive failure of any particular broker, machine or subnet"
"Ordering: Total message ordering is not preserved with networks of brokers. Total ordering works with a single consumer but a networkBridge introduces a second consumer. In addition, network bridge consumers forward messages via producer.send(..), so they go from the head of the queue on the forwarding broker to the tail of the queue on the target. If single consumer moves between networked brokers, total order may be preserved if all messages always follow the consumer but this can be difficult to guarantee with large message backlogs."
Use Kafka, a next-generation distributed messaging system, as it is easy to scale out, offers high throughput, can persist messages to disk, and can ensure ordering.
With Kafka you can increase the number of nodes to tolerate node failures. If you can't remove JMS, transfer messages as shown:
JMS Producer(s) -> Kafka Cluster -> JMS Subscriber(s)
See Connection between Apache Kafka and JMS.
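On the ordering point: Kafka only guarantees order within a partition, so with 1 producer + 1 consumer you would use either a single-partition topic or a fixed message key. A minimal sketch of that idea, assuming a broker at localhost:9092 and a topic named 'orders' (both placeholders):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class OrderedProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");   // placeholder broker list
            props.put("acks", "all");                            // wait for replication before ack
            props.put("key.serializer",
                      "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                      "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                for (int i = 0; i < 100; i++) {
                    // Using the same key routes every record to the same partition,
                    // which is what preserves ordering on the consumer side.
                    producer.send(new ProducerRecord<>("orders", "fixed-key", "message " + i));
                }
            }
        }
    }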

JMS Priority Messages Causing Starvation of Lower Priority Message

I have a queue that is loaded with high priority JMS messages throughout the day, and I want to get them out the door quickly. The queue is also being loaded periodically with lower priority messages in large batches. The problem I see on busy days is that there are always enough high priority messages at the front of the queue that none of the lower priority messages get selected until that volume drops off. Often they will sit on the queue until the middle of the night. The app is distributed over a number of servers, but the CPUs are not even breathing hard; the JMS seems to be the choke point.
My hunch is to implement some sort of aging algorithm that increases priority for messages that have been on the queue for a very long time, but of course, that is what middleware is supposed to do for me. I can't imagine that the JMS provider (IBM WebsphereMQ) or the application server (TIBCO BusinessWorks) doesn't have some sort of facility to cope with this. So before I go write some code, I thought I would ask, is there any way to get either of these technologies to help me out with this problem?
The BusinessWorks activity that is reading the queue is a JMS SOAP Event Source, but I could turn it into a JMS Queue Receiver activity or whatever.
All thoughts on how to solve this are welcome :-) TIA
That's like tying one hand behind your back and then complaining that you cannot swim properly. D'oh! First off, whose bright idea was it to mix messages? Just because you can do something does not mean you should.
The app is distributed over a number of servers, but the CPUs are not even breathing hard; the JMS seems to be the choke point.
Well then, the solution is easy. Put high priority messages into queue "A" (the existing queue) and low priority messages into a new queue "B". Next, start up another instance of your JMS application to read the messages off queue "B".
Also, JMS is probably not the choke-point. It is what the application is doing with the message data after the JMS layer picks up the message that is taking a long time (i.e. backend work).
Finally, how many instances of your JMS application are running against the existing queue? If you are only running 1 instance, why? If you have lots of CPU capacity then why don't you run 10 instances of your JMS application? Do some true parallel processing of messages.
If you really want to keep your messages mixed on the same queue and have the high priority messages processed first, and yet your volume of messages is such that you sometimes cannot work through it all until the middle of the night, then you quite simply do not have enough processing applications. MQ is a parallel processing system; it is designed to allow many applications to put or get from a queue at once. Make use of this by running more of your getting applications at the same time. They will work through your high priority messages quicker and then get back to processing the lower priority ones.
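As an illustration of "run more getting applications", here is a minimal sketch of several concurrent JMS consumers draining the same queue from one JVM. The JNDI lookups ("jms/CF", "jms/APP.QUEUE") and the thread count are placeholders for whatever your environment provides:

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.Message;
    import javax.jms.MessageConsumer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.naming.InitialContext;

    public class ParallelConsumersSketch {
        public static void main(String[] args) throws Exception {
            InitialContext ctx = new InitialContext();                       // placeholder JNDI setup
            ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/CF"); // placeholder factory name
            Queue queue = (Queue) ctx.lookup("jms/APP.QUEUE");               // placeholder queue name

            Connection connection = cf.createConnection();
            connection.start();

            int consumers = 10;  // scale this up while the CPUs are "not even breathing hard"
            for (int i = 0; i < consumers; i++) {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageConsumer consumer = session.createConsumer(queue);
                Thread worker = new Thread(() -> {
                    try {
                        while (true) {
                            Message m = consumer.receive();
                            // ... do the real backend work with the message here ...
                        }
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                });
                worker.start();
            }
        }
    }

Each session/consumer pair is used by exactly one thread, which keeps the sketch within the JMS threading rules.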
From your description it's clear that you want the high priority messages to be processed first. In such a case the lower priority messages will have to wait.
MQ will not increase the priority of messages just because they have been sitting in the queue for a long time. How would it know that it has to change a property of a message :)? You will need to develop an application to do that.
One option you could look at is segregating messages based on priority: for example, high priority messages are put to one queue and lower priority messages to another queue (see the sketch after this answer).
A second option would be to change the queue's delivery sequence (MSGDLVSQ) to FIFO. This makes messages be delivered to consumers in the order they arrived on the queue. But note that this ignores message priority, meaning that if a lower priority message is followed by a higher priority message, the higher priority message will wait until the lower priority message has been delivered.
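To illustrate the first option (segregating by priority), here is a minimal producer-side sketch. The two destinations and the priority threshold are assumptions for the example, not anything WebSphere MQ mandates:

    import javax.jms.Destination;
    import javax.jms.JMSException;
    import javax.jms.Message;
    import javax.jms.MessageProducer;
    import javax.jms.Session;

    public class PrioritySegregatingSender {
        private final MessageProducer highProducer;  // bound to e.g. queue "APP.HIGH" (assumed name)
        private final MessageProducer lowProducer;   // bound to e.g. queue "APP.LOW"  (assumed name)

        public PrioritySegregatingSender(Session session,
                                         Destination highQueue,
                                         Destination lowQueue) throws JMSException {
            this.highProducer = session.createProducer(highQueue);
            this.lowProducer = session.createProducer(lowQueue);
        }

        // Route by business priority instead of relying on JMSPriority ordering
        // within a single queue; a separate consumer pool then drains each queue.
        public void send(Message message, int priority) throws JMSException {
            if (priority >= 7) {          // assumed threshold for "high priority"
                highProducer.send(message);
            } else {
                lowProducer.send(message);
            }
        }
    }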

WebSphere MQ - How to find the total number of messages that pass through a queue manager

Is there a way to find the total number of messages that pass through an IBM WebSphere MQ queue manager over a specific period of time?
This sounds like a perfect use of the MQ Accounting and Statistics feature. Among other things, these features record the number of messages (with separate persistent and non-persistent counts) and also the number of bytes (not all messages are the same size).
You can turn on Accounting and Statistics for just a selection of queues and/or channels or for everything.
Further Reading:
Accounting and statistics messages
Turning on Queue Accounting
Turning on Queue Statistics
Supplied tool to view the output
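Once queue statistics are enabled, the records arrive as PCF messages on SYSTEM.ADMIN.STATISTICS.QUEUE, and the supplied sample amqsmon formats them for you. If you would rather collect them yourself, a rough sketch with the MQ classes for Java might look like the following; the queue manager name is a placeholder, error handling is omitted, and extracting the per-queue message counts from each record is left out:

    import com.ibm.mq.MQGetMessageOptions;
    import com.ibm.mq.MQMessage;
    import com.ibm.mq.MQQueue;
    import com.ibm.mq.MQQueueManager;
    import com.ibm.mq.constants.CMQC;
    import com.ibm.mq.headers.pcf.PCFMessage;

    public class StatisticsReaderSketch {
        public static void main(String[] args) throws Exception {
            MQQueueManager qmgr = new MQQueueManager("QM1");  // placeholder queue manager name

            int openOptions = CMQC.MQOO_INPUT_AS_Q_DEF | CMQC.MQOO_FAIL_IF_QUIESCING;
            MQQueue statsQueue = qmgr.accessQueue("SYSTEM.ADMIN.STATISTICS.QUEUE", openOptions);

            MQGetMessageOptions gmo = new MQGetMessageOptions();
            gmo.options = CMQC.MQGMO_WAIT | CMQC.MQGMO_FAIL_IF_QUIESCING;
            gmo.waitInterval = 30000;  // wait up to 30 seconds for the next statistics record

            while (true) {
                MQMessage raw = new MQMessage();
                statsQueue.get(raw, gmo);
                // Each statistics record is a PCF message; here we just dump its
                // parameters. amqsmon shows the full layout of the fields.
                PCFMessage record = new PCFMessage(raw);
                System.out.println(record);
            }
        }
    }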

Exchanges and Message priorities

I am trying to implement the following:
Messages arrive at the Message broker with message priorities
They find their way into various queues based on their message priority
So Q1 has messages with priority 1
Q2 has messages with priority 2 and so on ..
Is there a way to make the Message Broker process Q1 faster than the others?
Would it be possible to have a priority between queues ?
Q1 has a higher priority to be processed than Q2, or better still, processing of Q1 blocks other queues from being processed?
Can an exchange itself be a priority queue that in turn feeds the other queues?
I saw that it is possible to extend the default exchanges via plugins; is there anything out there that already implements the above requirement?
Is this something feasible ? Or is this against the basic philosophy of a message broker ?
Is there any link to best practices while using prioritized messages ?
I did post this message on the Qpid nabble forum on August 28 - but 'This post has NOT been accepted by the mailing list yet'.
Thank you for your time.
In Qpid you can define a queue as a "priority queue":
session.createQueue(queueName
    + "; {create: always, node: {type: queue, x-declare: {arguments: {'x-qpid-priorities': 3}}}}");
In a priority queue, a message with higher priority will leapfrog over messages with lower priority and will be picked up earlier. You need not define separate queues for each priority level.
The x-qpid-priorities parameter specifies how many distinct priorities are supported by the queue.
Note though, priority based leapfrogging only works for consuming messages in a queue. Browsing doesn't respect priorities and you will see messages in the enqueue order.
Implementing separate queues for each priority isn't very useful, but if you insist on doing that, you will have to manage priority-based consumption on your own. You can implement a consumer that checks for messages in the high priority queue, and then checks the lower priority queue only if the first queue is empty.
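If you do go down the separate-queues route, the polling logic described above might look roughly like this in JMS; the two consumers are assumed to be created elsewhere against your high and low priority queues, and the receive timeout is arbitrary:

    import javax.jms.Message;
    import javax.jms.MessageConsumer;

    public class PriorityPollingConsumer {
        private final MessageConsumer highConsumer;  // consumer on the high priority queue
        private final MessageConsumer lowConsumer;   // consumer on the low priority queue

        public PriorityPollingConsumer(MessageConsumer highConsumer, MessageConsumer lowConsumer) {
            this.highConsumer = highConsumer;
            this.lowConsumer = lowConsumer;
        }

        // Drain the high priority queue first; fall back to the low priority
        // queue only when no high priority message is immediately available.
        public Message nextMessage() throws Exception {
            Message high = highConsumer.receiveNoWait();
            if (high != null) {
                return high;
            }
            return lowConsumer.receive(1000);  // arbitrary 1-second wait on the low queue
        }
    }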

Are there any tools to optimize the number of consumer and producer threads on a JMS queue?

I'm working on an application that is distributed over two JBoss instances and that produces/consumes JMS messages on several JMS queues.
When we configured the application we had to determine which threading model we would use, in particular the number of producing and consuming threads per queue. We have done this in a rather ad-hoc fashion but after reading the most recent columns by Herb Sutter in Dr Dobbs (in particular this one) I would like to size our threads in a more rigorous manner.
Are there any methods/tools to measure the throughput of JMS queues (in particular JBoss Messaging queues) as a function of the number of producing/consuming threads?
This is not really about a specific tool, but may be helpful.
Consumers:
Not sure what your inner architecture is, but let's assume it's an MDB reading in messages. I assert that your only requirement here for rigorous thread count sizing is to choose a maximum cap. If your MDB uses resources from a finite supplier like a JDBC connection pool, consider the maximum cap as the highest number of concurrent instances from that resource that you can tolerate taking. If the MDB's queue is remote, you probably want to consider remote connections (or technically, JMS sessions) a finite resource. If the MDB has less finite requirements (and the queue is local), your maximum cap becomes the number of threads, memory used and/or flat out CPU consumed by the working threads. The reasoning here is that the JBoss MDB container will simply keep allocating more MDB instances (and therefore threads) until the queue is empty or the maximum cap is reached. The only reason I can think of that you would really agonize over the minimum would be if the container's elapsed time or overhead to create new instances is above your tolerance and those operations are usually pretty small potatoes.
Producers
A general axiom of messaging is that producers nearly always outperform consumers. You would think this is pretty arbitrary, but it is a pattern I see recurring all the time, even in widely different messaging scenarios. Anyways, it's tough to say how the threading should work for the producer without knowing a bit about the application, but are you basically capable of [indefinitely] proportionally increasing the number of producer threads and the number of messages generated, or do you have some sort of cap where additional threads simply do not generate more messages ? I would guess it is the latter since most useful work has some limited data or calculation supplier. As I see it, the two drivers here are ordering and persistence.
First off, if you have strict message ordering where messages must be processed in strict First Produced First Processed (FPFP) order, then you're in a bit of a bind, because you almost have to drop down to single-threaded throughput unless you can devise some form of logical message demarcation (e.g. a client number where any given client's messages are always sent to the same queue, but you may have multiple queues, each serviced by one thread, so each client is effectively FPFP).
Ordering aside, persistence is the next consideration, in that if you have reliable and extensive message persistence (or have a very high tolerance for message loss), just let the producer threads go to town. The messages will queue up reliably and eventually the consumers will [hopefully] catch up. However, if your persisted message count or simple queue depths can potentially give you the willies when they get too high, here's where a tool might come in useful. If your producer thread count can be dynamically modified (which it can in many Java ThreadPool implementations) then you could sample the queue depths and raise or lower the producer thread count in accordance with the queue depth ranges you define, optionally to the point where, if the consumers basically stall, so will the producers. I do not know of a specific tool that does this, but between two JBoss servers this is fairly simple to whip up. Picking your queue depth --> producer thread count mapping will be trickier.
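A rough sketch of that feedback loop is below. The getQueueDepth() probe is hypothetical (you would back it with whatever depth query your provider exposes, e.g. a JMX attribute on JBoss Messaging or an MQ inquire), the thresholds are made up, and the pool's maximum size is assumed to be at least 16:

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class ProducerThrottleSketch {
        // Hypothetical helper: report the current depth of the target queue.
        interface DepthProbe {
            int getQueueDepth();
        }

        public static void start(ThreadPoolExecutor producerPool, DepthProbe probe) {
            ScheduledExecutorService sampler = Executors.newSingleThreadScheduledExecutor();
            sampler.scheduleAtFixedRate(() -> {
                int depth = probe.getQueueDepth();
                // Made-up thresholds: back off when the consumers fall behind,
                // speed up again once the backlog drains.
                if (depth > 10000) {
                    producerPool.setCorePoolSize(1);    // consumers are stalling; nearly stop producing
                } else if (depth > 1000) {
                    producerPool.setCorePoolSize(4);    // moderate backlog; slow down
                } else {
                    producerPool.setCorePoolSize(16);   // queue is healthy; full speed
                }
            }, 0, 10, TimeUnit.SECONDS);                // sample the depth every 10 seconds
        }
    }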
Having said all that, I am going to actually read the article you linked to.....
I've got the perfect thing for you: IBM provide a free command line tool called perfharness.
It's aimed at benchmarking JMS providers, i.e. measuring the throughput of queues (single or multiple) given different numbers of producing or consuming threads.
Some features:
Send and consume messages at a fixed rate (msg/s) or at the maximum rate possible on the queue
Use a specific number of threads
Use either JMS or native MQ
Can use data either generated randomly or taken from a file
Generates statistics telling you exactly how fast your queue is performing
The only down side is that it's not super intuitive, given the number of operations it supports. And IBM haven't open sourced it, which is a shame. However it sounds perfect for your purposes.
