I am using WebSphere MQ 7.0 and here is my scenario:
I have a cluster gateway queue manager named 'CLUSD' and two clustered nodes named 'N1' and 'N2'. The configurations of N1 and N2 are identical.
When I send messages to CLUSD, the queue manager distributes them across the nodes (N1, N2); at this point everything is OK. But if one of the nodes goes down, for example N1, I expect CLUSD to send all messages to N2, and to resume sending to both once N1 becomes available again. That is not what happens: while N1 is down, CLUSD sends only some of the messages to N2 and keeps the rest on its transmission queue, and when N1 comes back, CLUSD delivers those held messages to N1.
It seems that when I send messages to CLUSD, the queue manager binds each message to a destination queue manager and keeps that binding until the destination becomes available.
What can I do to handle this?
How are you sending the messages to the queue? When you open a cluster queue, open it with the BIND_NOT_FIXED option (MQOO_BIND_NOT_FIXED). This lets the queue manager decide at put time which instance of the cluster queue to use, so messages are routed only to the instances that are currently available.
http://publib.boulder.ibm.com/infocenter/wmqv7/v7r0/index.jsp?topic=%2Fcom.ibm.mq.csqzah.doc%2Fqc11040_.htm
WAS should also allow you to set the default binding to 'not fixed':
http://pic.dhe.ibm.com/infocenter/wasinfo/v6r1/index.jsp?topic=%2Fcom.ibm.websphere.express.doc%2Finfo%2Fexp%2Fae%2Fumj_pqdsxm.html
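If the sending application uses the MQ classes for Java directly, the bind option goes on the open. A minimal sketch, assuming a bindings-mode connection and a clustered queue named APP.QUEUE (both names are placeholders):

import com.ibm.mq.MQMessage;
import com.ibm.mq.MQPutMessageOptions;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

public class BindNotFixedPut {
    public static void main(String[] args) throws Exception {
        // Bindings-mode connection to the gateway queue manager
        MQQueueManager qmgr = new MQQueueManager("CLUSD");
        // BIND_NOT_FIXED lets cluster workload management pick an instance per put
        int openOptions = CMQC.MQOO_OUTPUT | CMQC.MQOO_BIND_NOT_FIXED | CMQC.MQOO_FAIL_IF_QUIESCING;
        MQQueue queue = qmgr.accessQueue("APP.QUEUE", openOptions); // placeholder queue name
        MQMessage msg = new MQMessage();
        msg.writeString("hello");
        queue.put(msg, new MQPutMessageOptions());
        queue.close();
        qmgr.disconnect();
    }
}

Alternatively, leave the application opening with MQOO_BIND_AS_Q_DEF and set DEFBIND(NOTFIXED) on the clustered queue definition itself, so every opener gets the non-fixed behavior.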
I would like to know if it is possible to point an MQ sender channel at an F5 load balancer VIP address, rather than at a concrete MQ server IP address, and have each message delivered to one of the two MQ servers in the F5 cluster resource group.
What I'm trying to do is determine whether I could use this method in lieu of creating an MQ cluster gateway queue manager on yet more hardware, and instead use the F5 load-balancing feature to deliver messages to a cluster queue.
If I could capitalize on the F5 load balancing, I'm thinking it would simulate an MQ cluster gateway queue manager.
Would it work? Pitfalls?
You are looking at a couple of issues with this configuration:
With persistent messages, the sender and the corresponding receiver channel each increment a sequence number with every persistent message sent across the channel. If these sequence numbers do not match, the channel will not start until it is reset to match on one end or the other (usually the sender). This means that if the sender channel connects to QMGR1 behind the F5, the receiver on QMGR1 increments its sequence number; if on the next connection the sender is routed to QMGR2, the receiver's sequence number there will be lower than the sender's, and the channel will not start.
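When that happens, the sequence number has to be reset, typically with the RESET CHANNEL command in runmqsc. As a hedged sketch, the same reset can also be scripted with the PCF classes (com.ibm.mq.pcf in the MQ 7.0 era; the package moved to com.ibm.mq.headers.pcf in later releases; queue manager and channel names are placeholders):

import com.ibm.mq.pcf.PCFMessage;
import com.ibm.mq.pcf.PCFMessageAgent;
import com.ibm.mq.constants.CMQCFC;

public class ResetChannelSeqNum {
    public static void main(String[] args) throws Exception {
        // Connect in bindings mode to the queue manager that owns the sender channel
        PCFMessageAgent agent = new PCFMessageAgent("SQMGR");
        PCFMessage reset = new PCFMessage(CMQCFC.MQCMD_RESET_CHANNEL);
        reset.addParameter(CMQCFC.MQCACH_CHANNEL_NAME, "SQMGR.CQMGR1");
        reset.addParameter(CMQCFC.MQIACH_MSG_SEQUENCE_NUMBER, 1); // restart the sequence at 1
        agent.send(reset);
        agent.disconnect();
    }
}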
Even if you were only sending non-persistent messages, which do not increment the sequence number, you still would not get the same results as with a cluster gateway in front of the two queue managers. With a cluster configuration you typically get a round robin of the messages between the two clustered queue managers. A sender channel, by contrast, is normally configured to start when a message is put to the associated transmission queue (TRIGGER) and to keep running until no messages have been sent for the length of time specified by the disconnect interval (DISCINT). Because of this you would not see workload balancing of messages between the two queue managers behind the F5. Depending on how the F5 is configured and how long the disconnect interval is, you would see a group of messages go to one queue manager and then a group of messages go to the other; the size of each group would depend on the traffic pattern relative to the sender channel settings.
Note that even if the sender channel is configured to connect to only one of the two clustered queue managers, you can still have messages round robin between the two instances of the queue by setting the cluster workload use queue (CLWLUSEQ) attribute of the clustered queue to ANY. This requires that the remote queue (QREMOTE) on the sender's queue manager specify a remote queue manager alias (RQMA) as its remote queue manager name (RQMNAME) value. The alias then allows the messages to resolve to any instance of the clustered queue, including the local instance. Example objects for the sender queue manager (SQMGR), the first clustered queue manager (CQMGR1), and the second clustered queue manager (CQMGR2) follow:
SQMGR:
DEFINE QREMOTE(TO.CQLOCAL) RNAME(CQLOCAL) RQMNAME(CLUSTER.RQMA) XMITQ(CQMGR1)
DEFINE QLOCAL(CQMGR1) USAGE(XMITQ) INITQ(SYSTEM.CHANNEL.INITQ) TRIGDATA(SQMGR.CQMGR1) TRIGGER .....
DEFINE CHANNEL(SQMGR.CQMGR1) CHLTYPE(SDR) XMITQ(CQMGR1) CONNAME(10.20.30.40) .....
CQMGR1:
DEFINE CHANNEL(SQMGR.CQMGR1) CHLTYPE(RCVR) MCAUSER('notmqm') .....
DEFINE QREMOTE(CLUSTER.RQMA) RNAME('') RQMNAME('') XMITQ('')
DEFINE QLOCAL(CQLOCAL) CLUSTER('YOURCLUSTER') CLWLUSEQ(ANY)
CQMGR2:
DEFINE QLOCAL(CQLOCAL) CLUSTER('YOURCLUSTER') CLWLUSEQ(ANY)
Just want to confirm the correct way MQ delivers messages through the MQOutput node. I recently came across a situation where I felt a bit confused. Here is the scenario.
I have a local queue on a queue manager, say (A), which receives messages from applications, and a local broker associated with qmgr (A) with a message flow deployed, which consumes messages from this queue and drops them onto a local queue (L.B) on queue manager (B).
To successfully deliver the messages to qmgr (B), do I have to:
1. Create a remote queue definition on qmgr (A), with the transmission queue name matching the remote queue manager name, here (B), and set the MQOutput node's queue value to the remote queue definition name on (A), leaving the queue manager value blank;
or
2. Create only the transmission queue whose name matches the remote queue manager name, here (B), and set the MQOutput node's queue value to the target local queue (L.B) and the queue manager value to (B).
When I follow the first approach, messages reach the destination; when I follow the latter, messages sit in the local queue itself.
Is it always necessary to create n remote queue definitions when messages need to be dropped onto n remote local queues?
Kindly guide me to better understand this. Thanks in advance to each of you.
There is no necessity to create n remote queue definitions; MQ is happy to accept output addressed as "Queue Name" on "Queue Manager Name".
You say that when using method 2 your messages are "sitting in the local queue". There are a few things to check to solve this problem (a sketch of the put follows the list):
I assume the named queue L.B is defined on QMgr B and not QMgr A?
I assume the local queue the messages are sitting on is a transmission queue?
Have you defined a channel to read messages from the transmission queue they are stuck on?
Have you started the channel which should be moving the messages off the transmission queue to QMgr B?
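For reference, "Queue Name on Queue Manager Name" on the sending side corresponds to opening the queue with an object queue manager name, which is essentially what the MQOutput node does with method 2. A minimal sketch with the MQ classes for Java, assuming a transmission queue named B exists on A and a sender channel is running (names taken from the question):

import com.ibm.mq.MQMessage;
import com.ibm.mq.MQPutMessageOptions;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

public class PutToRemoteQueue {
    public static void main(String[] args) throws Exception {
        MQQueueManager qmgrA = new MQQueueManager("A");
        // Open L.B on queue manager B; name resolution on A finds the xmitq named B
        MQQueue q = qmgrA.accessQueue("L.B", CMQC.MQOO_OUTPUT, "B", null, null);
        MQMessage msg = new MQMessage();
        msg.writeString("routed via the transmission queue named B");
        q.put(msg, new MQPutMessageOptions());
        q.close();
        qmgrA.disconnect();
    }
}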
I have a setup with two queues (no exchanges), let's say queue A and queue B.
One parser puts messages on queue A, that are consumed by ElasticSearch RabbitMQ river.
What I want now is to move messages from queue A to queue B when the ES river acks them on queue A, so that I can do further processing on the ack'd messages, being sure that ES has already processed them.
Is there any way in RabbitMQ to do this? If not, is there any other setup that can guarantee that a message appears on queue B only after it has been processed by ES?
Thanks in advance
I don't think this is supported by either AMQP or the RabbitMQ extensions.
You could drop the river and let your consumer also publish to Elasticsearch.
Since the normal state is that the queues are empty, you can just retry reading the entries from Elasticsearch a few times (with exponential backoff); even if Elasticsearch loses the initial race, the reader backs off a bit and can then perform the task. This might require tuning the prefetch size/count in your clients.
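A hedged sketch of the consumer approach with the RabbitMQ Java client: consume from A with manual acks, index into Elasticsearch, publish to B, and only then ack, so a message reaches B only after ES has processed it. The indexIntoElasticsearch helper is hypothetical; the queue names are the ones from the question:

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;

public class RelayAfterIndex {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection conn = factory.newConnection();
        Channel channel = conn.createChannel();
        channel.basicQos(10); // prefetch count; tune to taste
        DeliverCallback onDeliver = (consumerTag, delivery) -> {
            indexIntoElasticsearch(delivery.getBody()); // hypothetical indexing helper
            // Publish to B via the default exchange, then ack the delivery on A
            channel.basicPublish("", "B", null, delivery.getBody());
            channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
        };
        channel.basicConsume("A", false, onDeliver, consumerTag -> { });
    }

    private static void indexIntoElasticsearch(byte[] body) {
        // placeholder: call the Elasticsearch index API here
    }
}

If indexing fails, the delivery is never acked, so the message stays on A and is redelivered when the channel is reopened rather than being lost.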
When a local queue manager receives the following message in its AMQ error log:
09/13/12 08:00:19 - Process(3017.20) User(mqm) Program(amqrmppa_nd)
AMQ9544: Messages not put to destination queue.
EXPLANATION: During the processing of channel 'TO_QM_QD2T1_C1' one or
more messages could not be put to the destination queue and attempts
were made to put them to a dead-letter queue. The location of the
queue is 2, where 1 is the local dead-letter queue and 2 is the remote
dead-letter queue.
... what is the mechanism by which MQ exchanges such information? Is there a built-in facility within the channel program API itself, or is the info exchanged as discrete messages placed on the SYSTEM.CLUSTER.COMMAND.QUEUE (in the case of a cluster)? Given that this could occur in a remote queue definition situation, with only simple sender/receiver channel pairs and no corresponding command queue necessarily, I could imagine that it would be handled via the channel process communications... just wondering...
The channel agents have a bi-directional communication between them, even though messages flow in only one direction. When a message fails to find the destination at the remote end there are several possibilities for what happens next. The channel will only continue to run if the message can be successfully put somewhere and the first place to try is the remote DLQ. If that fails, the local MCA must either relocate the message or stop the channel. Therefore, the two message channel agents work out between them what happens and what the status of the channel should be.
The peculiar wording of the error message reflects that the different dispositions of the message come out of the same code path and exception handling, and out of WMQ's preference for streamlined logic. The MCA knows at that point which DLQ the message was put to, and rather than maintaining two different error messages, or logic to word the message according to which DLQ was used, it just drops a number into a template. Thus a single error message and a single code path cover both possibilities.
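As a practical footnote, whichever DLQ the message lands on, it arrives prefixed with a dead-letter header (MQDLH) recording the failure reason and the intended destination. A hedged sketch of browsing that header with the MQ headers classes, assuming at least one message is on the DLQ (the queue manager name is a placeholder):

import com.ibm.mq.MQGetMessageOptions;
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;
import com.ibm.mq.headers.MQDLH;

public class ReadDlqReason {
    public static void main(String[] args) throws Exception {
        MQQueueManager qmgr = new MQQueueManager("QM_QD2T1"); // placeholder name
        MQQueue dlq = qmgr.accessQueue("SYSTEM.DEAD.LETTER.QUEUE",
                CMQC.MQOO_INPUT_AS_Q_DEF | CMQC.MQOO_BROWSE);
        MQMessage msg = new MQMessage();
        MQGetMessageOptions gmo = new MQGetMessageOptions();
        gmo.options = CMQC.MQGMO_BROWSE_FIRST; // browse; fails with 2033 if the DLQ is empty
        dlq.get(msg, gmo);
        MQDLH dlh = new MQDLH(msg); // parse the dead-letter header off the message body
        System.out.println("reason=" + dlh.getReason()
                + " destQ=" + dlh.getDestQName()
                + " destQMgr=" + dlh.getDestQMgrName());
        dlq.close();
        qmgr.disconnect();
    }
}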
In MQ Explorer, if I open up the properties for a particular queue (X), what property will tell me whether the queue is defined as an error-handling queue for another queue (Y)? I.e., if Y fails to process the message and the transaction rolls back, it will put the message on X.
Any queue designated as an error or exception queue (a backout queue in WebSphere MQ terminology) is an ordinary queue. The BOQNAME attribute on a primary queue points to the backout queue, but there is no attribute of the backout queue that points back to the primary queue. Indeed, this can be a many-to-one relationship, because any number of primary queues might point to a single backout queue.
One way to do this in WMQ Explorer would be to make sure that BOQNAME is visible in the display and then sort the queue list on that column. Then look for all instances with your backout queue name in them.
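Outside Explorer, the same reverse lookup can be scripted. A hedged sketch with the PCF classes: inquire on all local queues and print the ones whose BOQNAME matches your backout queue (the queue manager name and the backout queue X are placeholders):

import com.ibm.mq.pcf.PCFMessage;
import com.ibm.mq.pcf.PCFMessageAgent;
import com.ibm.mq.constants.CMQC;
import com.ibm.mq.constants.CMQCFC;

public class FindPrimariesForBackout {
    public static void main(String[] args) throws Exception {
        PCFMessageAgent agent = new PCFMessageAgent("QMGR1"); // placeholder qmgr
        PCFMessage inquire = new PCFMessage(CMQCFC.MQCMD_INQUIRE_Q);
        inquire.addParameter(CMQC.MQCA_Q_NAME, "*");
        inquire.addParameter(CMQC.MQIA_Q_TYPE, CMQC.MQQT_LOCAL);
        inquire.addParameter(CMQCFC.MQIACF_Q_ATTRS,
                new int[] { CMQC.MQCA_Q_NAME, CMQC.MQCA_BACKOUT_REQ_Q_NAME });
        for (PCFMessage response : agent.send(inquire)) {
            String name = response.getStringParameterValue(CMQC.MQCA_Q_NAME).trim();
            String boq = response.getStringParameterValue(CMQC.MQCA_BACKOUT_REQ_Q_NAME).trim();
            if (boq.equals("X")) { // X = the backout queue from the question
                System.out.println(name + " -> " + boq);
            }
        }
        agent.disconnect();
    }
}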